Ask HN: Which skill do you believe will take the longest to be replaced by AI?
There is a lot of debate about whether AI will surpass humans in all economically valuable skills (AGI, by one definition). Regardless of whether or when this will happen, many people have already lost their jobs in part due to the emerging capabilities of AI models, in fields including writing, document analysis, design, and art.
This leaves many fearing they will be next on the chopping block. Many assume physical tasks will take longer to automate, since building, verifying, and testing humanoid robots will take longer than deploying a virtual AI agent. However, many believe the writing is on the wall either way, and that those who work with their hands or bodies will only have a few more years than the formerly employed white-collar class.
Which skills then, or combinations of skills, do you believe will be safest for staying employed and useful if AI continues improving at the rate it has been for the past few years?
In software? I don't see how high-stakes infrastructure changes can be done by current AI technologies. It's different from programming in that you might only get one chance to perform each step, in the right order, with the right configuration for your environment. You might use AI for initial brainstorming, but then you need to dig into the official documentation for your specific version of e.g. a database, and then validate each individual step, and so on.
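To make "validate each individual step" concrete, here's the kind of pre-flight check I mean. This is a hypothetical sketch assuming Postgres and the node-postgres (pg) client; the expected version and the runbook it refers to are made up:

```ts
import { Client } from "pg";

// Hypothetical guard: refuse to proceed unless the server is the exact
// major version the runbook steps were written and tested against.
const EXPECTED_MAJOR = 16; // assumption: runbook validated on Postgres 16

async function preflight(): Promise<void> {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();
  try {
    const res = await client.query("SHOW server_version;");
    const version: string = res.rows[0].server_version;
    const major = parseInt(version.split(".")[0], 10);
    if (major !== EXPECTED_MAJOR) {
      throw new Error(
        `Runbook was validated on Postgres ${EXPECTED_MAJOR}, but the server reports ${version}. Stop and re-verify every step.`
      );
    }
    console.log(`Pre-flight OK: Postgres ${version}`);
  } finally {
    await client.end();
  }
}

preflight().catch((err) => {
  console.error(err.message);
  process.exit(1);
});
```

An AI can draft a script like this easily enough; deciding that the check is needed at all, and what "expected" means for your environment, is the part I don't see current models doing reliably.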
Specialist skills. AI is trained on data. Data pulls it towards the average. It does a good job, but only good. If great work is valuable, there's still a need for specialists.
Like right now, native mobile jobs are mostly unaffected by AI. Gemini, despite all the data in the community, doesn't do a decent job at it. If you ask it to build an app from scratch, the architecture will be off. It'll use an outdated tech stack from 2022. It will 'correct' perfectly good code to an older form, and if you ask it to hunt for bugs in cutting-edge tech, it might rip out the new code and replace it with old stuff. It often confuses methods that share a name across languages, like .contains().
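As a small illustration of that .contains() mix-up (a made-up TypeScript snippet; the same confusion appears in any direction between JS/TS, Java, Kotlin, and Swift):

```ts
const tags: string[] = ["kotlin", "swift"];

// What a model steeped in Java/Kotlin often emits. Array has no contains()
// in JS/TS: a compile error in TypeScript, a TypeError at runtime in JS.
// tags.contains("swift");

// The method JS/TS actually has:
console.log(tags.includes("swift")); // true

// Kotlin's List and Swift's Array really do spell it contains(), which is
// exactly why a model hopping between languages keeps guessing wrong.
```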
But where very high-quality data is easily accessible (writing, digital art, voice acting, etc.), the skill is viable for AI to clone. There's little animation data and even less oil painting data, so something like oil painting will be far more resistant than digital art. It's top-tier at Python and yet it struggles with Ren'Py.
Anthropic released results from an experiment where Claude managed a vending machine: https://www.anthropic.com/research/project-vend-1
This is a fairly simple task for a human, and Claudius has plenty of reasoning ability and financial data. But it can't reason its way into running a vending machine because it doesn't have data on how to run vending machines.
The phrase “outdated tech stack from 2022” is kind of hilarious
Out of date after only 3 years?
As one using tech mostly laid down in the last millennium, I also smiled at this.
But I think what the poster meant was that the AI is not always "up to date". So if my C compiler is a 2025 version and my code makes use of features added since 2022, the AI can "retrogress" the code simply because it isn't aware of the new features.
Or another example: imagine when JavaScript Promises were new. There were a lot more examples of "not using promises" than "using promises", so the AI is likely to use the old pattern.
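For instance, here are the two idioms side by side (a toy TypeScript sketch; the names are invented for illustration):

```ts
interface User { id: string; name: string; }

// Pre-Promise callback style: vastly overrepresented in older training data.
function getUserCallback(id: string, cb: (err: Error | null, user?: User) => void): void {
  setTimeout(() => cb(null, { id, name: "Ada" }), 10); // simulated async lookup
}

// Promise/async style: the newer idiom a model may quietly "retrogress" away from.
async function getUser(id: string): Promise<User> {
  return { id, name: "Ada" };
}

getUserCallback("1", (_err, user) => console.log("callback style:", user));
getUser("1").then((user) => console.log("promise style:", user));
```

Both do the same thing; the model just defaults to whichever pattern dominated its training data.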
If you're writing "up to the minute" code because you put the effort in to keep abreast of new stuff, then this retrograding will seem, well, frustrating.
Sadly, I suspect this will lead to much information that is normally available for free being guarded more tightly, as the opportunity cost of not putting a barrier between valuable data and those who want access increases over time.
As Geoffrey Hinton put it: "Train to be a plumber" https://www.youtube.com/shorts/WF9KEMmOm7k
> There is a lot of debate about whether AI will surpass humans in all economically valuable skills (AGI, by one definition).
Actually very little debate. We get a lot of unsubstantiated hype from companies like OpenAI, Anthropic, Google, Microsoft. So-called AI has barely made a dent in economic activities, and no company makes money from it. Tech journalism repeatedly fails to question the PR narrative (read Ed Zitron).
> Regardless of whether this will happen, or when, many people already have lost their jobs in part due to the emerging capabilities of AI models…
Consider the more likely explanation: many companies over-hired a few years ago and have since cut jobs. Focus on stock price in an uncertain economy leads to layoffs. It's easier to blame AI for layoffs than to admit C-suite incompetence. Fear of the AI boogeyman gives employers the upper hand in hiring and salary negotiations, and keeps employees in line.
> Actually very little debate. We get a lot of unsubstantiated hype from companies like OpenAI, Anthropic, Google, Microsoft
Would you really consider the Nobel laureates Geoffrey Hinton¹, Demis Hassabis², and Barack Obama³ not worth listening to on this matter? Demis is the only one with an ulterior motive to hype it up, but compared to normal tech CEOs he has quite a bit of proven impact (AlphaFold, AlphaZero) that makes him worth listening to.
> AI has barely made a dent in economic activities
AI companies' revenues are growing rapidly, reaching the tens of billions. The claim that AI is just a scapegoat for inevitable layoffs seems fanciful when there are many real-life cases of AI tools performing work equivalent to many person-hours in white-collar domains.
https://www.businessinsider.com/how-lawyer-used-ai-help-win-...
To claim AI cannot be even a partial cause of layoffs requires an unshakable belief that AI tools are not even labor-multiplying (that is, allowing one person to perform more work at the same level of quality than they otherwise could). To assume this has never happened by this point in 2025 requires a heavy amount of denial.
That being said, I could cite dozens of articles, numerous takes from leading experts, scientists, legitimate sources without conflicts of interest, and I'm certain a fair portion of the HN regulars would not be swayed one inch. Lively debate is the lifeblood of any domain that prides itself on intellectual rigor, but a lot of the dismissal of the actual utility of AI, the impending impacts, and its implications feels like reflexive coping.
I would really, really love to hear an argument that convinces me that AGI is impossible, or far away, or that all the utility I get from Claude, o3, or Gemini is just a trick of scale and memorization, entirely orthogonal to anything akin to general human-like intelligence. However, I have not heard a good one. The replies I get are largely ad hominems toward tech CEOs, dismissive characterizations of the tech industry at large, and thought-terminating quips that hold no ontological weight.
1: https://www.wired.com/story/plaintext-geoffrey-hinton-godfat...
2: https://www.axios.com/2025/05/21/google-sergey-brin-demis-ha...
3: https://www.youtube.com/watch?v=72bHop6AIcc
4: https://www.cio.com/article/4012162/ai-begins-to-reshape-the...
I don't think it really matters in the end. If AI takes over enough jobs in general, then even if you train to be a plumber, who is going to pay you to do any plumbing?
Masses losing their jobs due to AI (or for any reason) will have a widespread effect on every other sector, because at the end of the day a huge part of the economy is based on people just spending their money.
Massage. Anything else where literal human touch is part of the value.
Plumbing.
Embalming and funeral direction.
Childrearing, especially of toddlers. Therapy for complex psychological conditions/ones with complications. Anything else that requires strong emotional and interpersonal judgement and the ability to think outside the box.
Politics/charisma. Influencers. Cult leaders. Anything else involving a cult of personality.
Stand up comics/improv artists. Nobody’s going to pay to sit in a room with other people and listen to a computer tell jokes.
World class athletes.
Top tier salespeople.
TV news anchors, game show hosts, and the like.
Also note that a bunch of these (and other jobs) may vanish if the vast majority of the population is unemployed and only a few handfuls of billionaires can afford to pay anyone for services.
I’d also note that a lot of jobs will stay safe for much longer than we fear if AI continues to be unable to actually reason and can only handle patterns / extrapolations of patterns it’s already seen.
"Key Account Sales" are difficult to replace because these sales experiences are not documented and depend on many factors in the field. They are not used as input for LLM. Therefore, LLM does not have the knowledge of "Key Account Sales", and this skill is very valuable.
I think the hardest skills for AI to replace will be those that require human trust, emotional intelligence, and real-time physical adaptability.
For example, conflict resolution, therapy, and coaching depend on nuance, empathy, and trust.
Skilled trades: plumbing, electrical work, HVAC repair, auto mechanics, elevator technicians.
Roles combining physical presence with knowledge: emergency responders (firefighters, EMTs), disaster relief coordinators.
Aside from helping with the administrative part of the work, like drafting docs or formatting Excel files, it's hard to imagine an LLM doing a salesman's job better than a person, especially for products with long sales cycles.
Working with business people, solving XY problems, and figuring out the “what needs to be done”. Telling computers what to do has always been the easy part of 90% of what software developers do.
That’s why, if you look at the leveling guidelines for any well-known tech company, “codez real gud” only makes a difference between junior and mid-level developers. After that it’s about “scope”, “impact”, and “dealing with ambiguity”.
Yes I realize that there are still some “hard problems” that command a premium for people to be able to solve via code - that’s the other 10% and I’m being generous.
You have started with the assumption that the business people are not going to be replaced by AI themselves, or that they won't be working directly with an AI development tool.
I am sure this time it will be different and the “no code” solution will allow non-developers to create their own solutions.