While I get that AI content is going to become more and more common over time, and that trying to fight that flood will very much be like trying to fight a literal flood – utterly useless – I still think this use case, in particular, is a bad idea.
As we reported recently, among its various generative AI experiments, LinkedIn has been developing a new option that would enable you to generate AI posts, which app researcher Nima Owji found in the back-end code of the app.
As you can see in this example, LinkedIn’s AI update assistant, in this early iteration, would prompt you to ‘share your ideas’ in the composer. It would then provide suggestions for a ‘first draft’ of a post.
Well, LinkedIn’s now actually shipped this, with some users now able to access its new AI post generation tool in the app.
As explained by LinkedIn’s Director of Product Keren Baruch:
“When it comes to posting on LinkedIn, we’ve heard that you generally know what you want to say, but going from a great idea to a full fledged post can be challenging and time consuming. So, we’re starting to test a way for members to use generative AI directly within the LinkedIn share box. To start, you’ll need to share at least 30 words outlining what you want to say – this is your own thoughts and perspective and the core of any post. Then you can leverage generative AI to create a first draft. This will give you a solid foundation to review, edit and make your own, all before you click post.”
Ah, so it’s not designed to be used as a tool to, like, fake that you know what you’re talking about, only to help you pretend that you’re able to articulate your thoughts in a coherent manner.
Makes sense, especially for a platform on which people are trying to display their professional skills and competencies – why not make it easier for them to just churn out opinions and perspectives that don’t reflect their own knowledge or understanding?
This is my key concern with LinkedIn’s generative AI post prompts: it’s going to enable people to misrepresent who they are, and what they know, by making it incredibly easy to just fake it, post, and move on. And with recruiters often assessing people’s LinkedIn presence as part of their candidate research, that’s potentially going to be a big problem, which could lead to disastrous interviews, misguided connections, and even bad hires as a result.
Of course, there’s a lot more to locating and hiring talent than just assessing a candidate’s LinkedIn presence, and as Baruch notes, you do have to put down, like, 30 words first, so it’s not all AI-generated, either way.
But the precedent here is not good: LinkedIn’s basically telling people to use AI-generated posts, which takes the ‘social’ element out of ‘social media’ (as you’re no longer interacting with a human), while also inviting fakers and scammers to just tap on through and pretend to be someone that they’re not.
Like, surely there are already enough ‘hustle culture’ fakers in the app, right?
Among LinkedIn’s various new generative AI elements, which include AI-generated profile summaries, AI-assisted job descriptions, generative AI messages for job candidates, and an AI InMail assistant, this one is the worst.
It’s one thing to concede that more and more machine-generated content is going to be coming across our screens, but it’s another to encourage it – and again, LinkedIn should be where people are presenting their professional insights and knowledge.
This, in my view, could significantly devalue the platform as a showcase of genuine professional expertise.
But it’s here, and it’s being tested with a small group of users ahead of a wider roll-out. Recruiters – good luck.