With Matt Burney, Senior Strategic Advisor at Indeed
Below is an automated transcript of this episode
Stephanie Harris-Yee 00:17
Hello, my name is Stephanie Yee and I’ll be your host today on this episode of Global Ambitions. Our guest today is Matt Burney, and he is the Senior Strategic Advisor at Indeed, and our topic today is a little bit different. We’re talking about AI ethics in global hiring. So, Matt, welcome to the program.
Matt Burney 00:37
Hey, nice to meet you.
Stephanie Harris-Yee 00:39
Usually on this podcast, we’re talking about localization and the international business space, and we talk a lot about AI in translation or AI in content creation, but today is a slightly different point of view, talking about talent acquisition, so I’m very intrigued. Tell me a little bit more about how AI is being used in talent acquisition.
Matt Burney 00:58
Well, I think it’s no understatement to say that AI has radically changed the way that we all work. If we were having this conversation this time last year, OpenAI would have launched ChatGPT, what, a week or so ago, two weeks ago. We’re only 12 months down the road, and look at where we are already, though the world of AI has been around for a really long time. I’ve been talking about AI to people who didn’t want to listen to me since about 2015. And most of the stuff that is available now has been around for a really, really long time. In fact, when we talk about AI, a lot of the stuff that we’re talking about really is GPTs or large language models, whereas the backbone of AI is what we at Indeed have been using for a really long time to help with search and matching and all of that kind of stuff. So AI has been in the background really subtly changing the way that we interact, the way that we hire people. But where we’ve seen the introduction of large language models, the likes of ChatGPT, Bard, Bing, whatever your large language model of choice is, those have become very readily available since this time last year, essentially.
Very rapidly, we’ve seen an uptake where people are saying, I want to do something with it. And let’s face it, in the world of talent and hiring, we suffer very badly with what is known as shiny new toy syndrome. We love the new stuff and we love to go out and play with it, which I think is great, and shows how good the HR and talent space is. But we’ve sort of rushed out and started already looking at AI and how we can integrate it, and it’s already in process. But the very public part of that is what we’re seeing change quite rapidly. So the very obvious use case, which I think everybody in HR and recruitment will definitely have seen, is rewriting job descriptions and job ads. That’s the most obvious use case, and I’m yet to come across somebody who works in hiring who hasn’t at least played with that. And I guess that’s precisely the problem. They are playing with something. It’s not necessarily building something that’s a lasting tool, but we’re starting to see that change quite rapidly.
Lots of companies I work with are now looking at their vendor market and saying, do our vendors actually think AI first? Are they actually thinking about how they’re going to integrate artificial intelligence into the products that they sell us, into applicant tracking, into sourcing, into CRM, all of these sorts of things, and also things like candidate comms, how do we actually talk to people? One of the biggest problems that we’ve always had within hiring is the recruitment black hole. You apply for a job and you hear nothing back. That’s the worst thing you can get. AI is really good at filling up the black hole by providing you with instant communication, meaningful stuff, things that actually really help you move along that decision-making journey as a candidate.
So we’ve seen lots and lots of pockets of use all over the place. We’re starting, even only after a year, to see some consolidation, some settling down, and people actually saying, well, actually, we need to look at the big providers and those vendors that we work with really closely and see what they’re doing to move us forward, to help us be better communicators, to save us time and to be more effective. If you look at studies around talent acquisition, the really worrying stat is that about 14 and a half hours a week are wasted in talent acquisition teams on manual tasks. Nobody gets hired to do a manual task. That’s the kind of stuff that we really should be looking at AI to automate and make us better.
Stephanie Harris-Yee 04:45
So for that, as a global company, a company that’s hiring people across borders in multiple languages (my team is mostly in Europe and I’m in the US, that kind of thing), how is AI, I guess you could say, impacting or improving that sort of team building? Is it different globally versus, say, in a local use case?
Matt Burney 05:08
I think the usage and the utility of AI has been pretty ubiquitous here. It works really well right the way across the board. When we’re looking at ways that people are employing it, I haven’t seen massive differences in the way that people are using things across different geographies. I have seen big differences in take-up rates. So if you look at where we are in the UK versus the US, for example, I’m continually surprised to see the US is actually slower at adopting a lot of things than we are in the UK. If you look at the UK versus Europe, if you go to Northern Europe, I was lucky enough to spend some time speaking at an event in Sweden quite recently. Everybody was talking about AI, everybody’s employing AI, great. Same thing goes for the Netherlands, but we see a little bit more resistance in France and Spain and Italy and places like that. So there are nuances around usage. That’s interesting because if we as a business want to roll something out that everybody’s going to use, we need to understand the adoption rates that people have in different locations. Are they actually all playing on the same field?
Some of the really exciting stuff that we’re seeing is when you’re thinking about translation. Translation is a big issue for a lot of people. Some of the great tools that are out there will allow you to do translation, things like HeyGen. So HeyGen does real-time translation, so you can record a video, press a button, and it’ll instantaneously translate that into any language, and it’ll also sync your lips up to look like you’ve spoken that as well. That’s really cool. Super, super useful to people, you know. HeyGen have just launched a new avatar generation tool as well, so you can create an avatar, you can translate anything to any language. So things like global communication are becoming a lot easier for people who perhaps haven’t done that very well in the past.
And I think there’s two things in that. When we think about people who are not the most verbose or articulate, you may not be a great writer of emails, you may be terrible at filling in feedback forms or things like that, AI is a really, really great opportunity for you to go and write a really compelling piece of content. If you’re not naturally good at that, you should always edit it and, you know, never just trust the AI to do it for you, but it helps people be more creative. Where I’ve seen that be really, really effective is if you’re trying to be creative, but you also need to communicate to somebody in a language that isn’t your own. So I’d actually be able to go to an AI and say, I need to write this email.
It’s about a difficult thing that we’re doing in a project, but I need to send it to somebody who is in China and I don’t speak Chinese. How do I make that compelling, and how do I make that work both in a business language and in a way that is non-confrontational, something like that? That sort of stuff is really compelling. It’s amazing that we can do that right now, because 12 months ago I wouldn’t have been able to go and do that.
Stephanie Harris-Yee 07:59
So speaking of which, talking about dubbing and speaking in a language that you are not actually speaking: I’ve heard a lot of talk about the ethics behind that sort of false presentation, or however you want to call it. So what are some of those ethical considerations about using AI in these different ways that you’ve seen?
Matt Burney 08:19
Well, it’s always that question with AI or any kind of automation tool: just because you can, does it mean you should? I think that’s the thing that every organization and individual needs to think about first. This is all really super easy stuff. Anyone can kind of log into things. Broadly, it’s all free, and don’t forget, if it’s free, you’re the product. Everybody should always remember that one. You’re there to go train that model. So when you’re putting your face into something, and take the example of that HeyGen thing, I don’t know the policy behind HeyGen, but at the end of the day, you’re recording your face, you’re recording your voice, and you’re putting that into an artificial intelligence tool which is going to be caching that at some point. So it’s going to be referencing your expression, your face, your voice, and it’s probably adding that into its database of things. So it’s a question of, do you want to be doing that? And actually, as an organization, should you be encouraging that for your employees?
Personally, I think that if you’re being ethical as an employer, you probably should put some guardrails up around that, because you want to make sure that people understand exactly what it is they’re getting into, and why they’re getting into it. It’s a bit like the use of any AI tool: you have to be very careful about what you put into it. You have to be intentional about what you’re doing. You have to be aware of privacy concerns. You should also be looking at your data constraints.
From country to country, the laws differ. What you can do around AI usage in New York is different to what you can do in California; in the same country you have different laws. But in the UK versus Europe, you’ve got different laws across a very short stretch of water. You go out to the Middle East and further, and it’s kind of like the Wild West. You can do anything, really. There isn’t a lot of governance or law around it. So, as employers, I think there’s a true responsibility to sit there and say, why are we doing it, what are the outcomes of this, what’s our expectation of this? And also, are we meeting our regulatory and legal demands? Because those should be your first consideration.
There is a very strong argument, and I know a number of companies that are doing this right now, to go out and have your own AI tool, to have an AI that exists within your business, sits over all of your business stuff, and never goes external. It doesn’t share anything externally. If you look at Amazon Q, which was released this week, Amazon Q looks at all your internal data and provides a GPT for your employees. Great, that’s really cool. It’ll do language translation in that as well. So if there’s something that you want to look at that’s in a language you don’t understand, it’ll translate that for you. Great. But it is a locked-down, private GPT, and that’s important if you want to be conserving all of your data and actually being a responsible employer.
Stephanie Harris-Yee 11:07
Okay. So I guess, going into the future here, what do you see happening? Do you see it mostly moving towards these private, company-specific AI programs, or do you see regulation stepping in and helping to regulate some of the open source models so it is more secure? Or, I don’t want to limit your ideas of the future. What are your ideas of the future?
Matt Burney 11:35
How long is a piece of string, right? It’s so hard to predict, and I think, you know, being in the habit of prediction is not a great place to be, because you’re probably gonna be wrong, right? I know a number of relatively small companies that literally folded within the last couple of weeks because OpenAI launched their new model and said, hey look, you can go and build your own GPT. And they literally turned around and went, I’ve spent a year trying to build something that does that, and now you give it away for free. So this is the problem when we think about where we’re going to be with AI. We don’t know what’s coming in the very short term. When we look at things like GPT-5, we’re at 4 Turbo, which is the worst name ever. God, why is that important? But we’re at 4 Turbo, we’re gonna be at 5, and when we get to 5, 5 is gonna be very much video based, from what we understand. So all the stuff we’re doing right now where we’re saying, oh, you know, it writes code for me and it does this, it does this, it does this, and I have to type everything in, very likely you’re not gonna have to type everything into GPT. It’s got speech to text. So if it’s got speech to text, very likely you’re probably gonna wind up with some sort of avatar you can go and talk to. That’s probably the way you’ll interface with it. It seems to be that’s where they might be going. I don’t know, I’m in no way affiliated to OpenAI. So yeah, you know, it’s super hard to go and look at where we’re gonna be.
There’s gonna be a lot of companies who are saying, let’s get our own AIs and let’s kind of lock those down, and we’re already moving towards a kind of rentism view. If you have GPT-4, you’re paying 20 bucks a month for it, and you automatically have an advantage over somebody who’s using the free version, and that’s kind of where we’re gonna be. There is a real democratization problem around that. If we were more egalitarian about all of this, everybody should be able to have their own AI. Everybody should be able to have their own GPT. It doesn’t cost very much to do that. You can run it on a mobile phone and you can build it yourself. People should be doing that, but realistically, we’re all quite lazy and we just like things that come on our phones, right? So we’re very likely to see it being this kind of rentism version where we’re going to get AIs that are probably quite bespoke to us. We’re already seeing that.
There’s a shopping app that I saw come in, I’m trying to think who put it out there? Mastercard, they’ve got Shopping Muse. Yeah, an AI-powered personalized shopper that goes with your Mastercard. But this is very likely where we’re gonna start going, something that is very personalized to each of us. We’re probably going to wind up initially, in the relatively short to mid term, with companies doing GPTs that are company-wide. So we go and interact with a GPT. I think that’s a really good move.
The next stage along from that will likely be that we’re all going to have a co-pilot, or our company will want us to use its co-pilot. That’s the future that we’re going to have.
I think everybody who’s using AI in any variety at the moment is treating it like a co-pilot. You know, it’s the thing you use to check your emails. If you’re doing creative writing, it might be the prompt starter or the finisher. It’s the thing that helps you put together your podcast, if you’re a podcaster. It sits at your side and helps you out. That’s where we are right now, but it will become a lot more. It’ll become much more integral to our day-to-day, and I think businesses are going to start looking at, how do we provide a good, solid co-pilot foundation? Obviously, Microsoft Copilot is coming out very shortly. That’s the first iteration of that, but I think pretty much everybody will be doing a similar kind of thing in the, well, I’d like to say long term, but it’s probably going to be short to medium term, because that’s the pace we’re moving at.
Stephanie Harris-Yee 15:46
Yeah, things move so fast, so fast. Well, I think we’re running out of time for today, but thank you so much for coming on this show and this has been very, very enlightening and interesting, and I’m sure our listeners will enjoy it as well.
Matt Burney 16:00
Yeah, it was all good.