LinkedIn has started rolling out a new generative AI feature for posts on the platform. Not everyone has it yet, and details about transparency, information sharing, and whether posts will be used to train AI are scarce.
LinkedIn announced its intention to offer generative AI for posts a few months ago. This is not the first generative AI tool the organization has released: it already offers AI assistance for ‘collaborative articles’, as well as for writing job descriptions, ads and profile descriptions.
When creating a new post, the tool gives users the option to generate a ‘draft’ using AI. It asks users to describe in detail, with examples, what they want the post to say.
If the AI thinks there is not enough detail, users are prompted to try again:

Image: Steph Clarke
Users began reporting access to the feature last week. It is currently unclear how access is being allocated, whether by region, account type or some other factor.
For example, this author has seen posts from users in various parts of Australia who have access to the feature, but does not have access personally.
“When it comes to posting on LinkedIn, we’ve heard that you generally know what you want to say, but turning a great idea into a full post can be challenging and time-consuming. So, we’re starting to test a way for members to use generative AI directly in the LinkedIn Share Box,” Karen Baruch, director of product at LinkedIn, wrote in a blog post announcing the new feature.
“Responsible AI is a fundamental part of this process so we are thoughtfully moving forward to test this experience before rolling it out to all of our members.”
LinkedIn will not answer questions about this generative AI feature
This thoughtful approach to AI unfortunately doesn’t extend to questions about the rollout. The organization declined to respond to SmartCompany’s questions and instead pointed to previously published blog posts related to AI.
These included posts on its responsible AI principles, its approach to detecting AI-generated profile photos, how it uses AI to protect member data, and how it has put its responsible AI principles into practice.
Our questions included whether there are plans to flag AI-written posts for transparency, what measures are being taken to ensure the accuracy of posts and avoid helping to spread misinformation, and whether these posts will be used to train the large language model (LLM) that powers LinkedIn’s AI.
On top of that, it was previously reported that LinkedIn used both GPT-4 and GPT-3.5 to create its various AI-powered writing suggestions.
LinkedIn specifically cites Microsoft’s lead when it comes to the platform’s responsible AI principles, which makes sense given the tech giant owns the platform. These principles include providing transparency, maintaining trust and accepting responsibility.
Addressing issues surrounding responsible AI
It is worth noting that the ‘Our responsible AI principles in practice’ post addresses some concerns about the use of AI on the platform.
For example, when AI is used to help generate collaborative articles, job descriptions and profile descriptions, the user is made aware that AI is being used.

Image: LinkedIn
However, the language here is very specific, and it is not clear whether the same transparency is afforded to readers of this material. For example, will a box explaining that a job, article or profile was written with AI also be visible to other LinkedIn members?

Image: LinkedIn
Similarly, the blog post goes into detail about trust, accountability and security. Its main focus is AI governance and the checks and balances LinkedIn applies to its own use of AI, including potential bias. And that’s fantastic: these are integral things that any organization using generative AI must do.
But that still doesn’t answer the question of public accountability and transparency once generative AI content is public-facing on the platform.
And that’s a shame. If LinkedIn is making a big deal about transparency with generative AI, why isn’t it clear whether readers, as well as creators, will be made aware when these tools have been used?

Image: Steph Clarke
“At a time when LinkedIn is adjusting its algorithms in an effort to increase conversation on the platform, the decision to build in a native tool that creates more generic content seems unusual,” Melbourne-based futurist Steph Clarke told SmartCompany.
Clarke is also one of the LinkedIn users who now has access to the generative AI feature.
“There has been growing demand for more transparency about how and where AI is used. Just last week Atlassian called for a traffic light system, and New York requires organizations to declare and audit the use of AI in hiring decisions.
“LinkedIn has missed an opportunity to lead on this transparency in the social media space by building this AI post generator into the platform.”
Playing devil’s advocate
While AI is certainly not new, it is still in its infancy when it comes to public use (especially generative AI) and the necessary regulations and responsibilities that come with it.
There are well-founded fears about job losses, potential bias, the spread of misinformation and the creation of harmful content.
And this is why people are calling for an extra layer of transparency when it comes to AI-generated content. And I am one of them.
But I was challenged by a perspective shared in the comments on Clarke’s post about LinkedIn’s generative AI rollout. The commenter pointed out that the use of ghostwriters has been common practice for years, from books to blogs and social media posts, particularly in the business world.
And this is a good point.
Copywriters, PR professionals and even interns have long written words for CEOs. Most professionals know this, and yet disclosure is never expected. Yet many of us are now demanding it with the introduction of widely available generative AI.
Of course, there are other considerations when it comes to humans versus robots. AI hallucinations and incorrect ‘facts’ are still real problems, for example.
But I’d be lying if I didn’t admit this gave me pause, as do many things in the larger AI debate.