I would like to start by saying that the following are personal opinions, observations, and anecdotes, not a scientific study (alas, no data and not enough inclination for that).

I can see the point regarding energy consumption, and I also find it quite irritating that our social group, Gamers™, seeks and encourages higher energy use on something as frivolous as graphical fluff (may UE5 be sunsetted). As for necessity, it is very relative. I do not have mobility impairments and can use a broom instead of a vacuum cleaner (I believe the animal companion prefers less noise), while someone whose job and source of income is cleaning would go for the more "human energy"-efficient option.

In the case of LLMs, one use case I've seen is job searching, a very generic activity involving a large amount of text. Someone I know tried the free (government-funded) employment assistance services. The meatbags there were nigh useless and apparently could not parse the person's educational background and previous employment, while the positions suggested might as well have been pulled randomly from a pool. The chatbot, on the other hand, was able to provide job titles for the desired career direction, what to watch out for in the adverts, how to format the CV, and how to pace the search so it could be done alongside ongoing employment without burning out. The LLM was also available at any time and provided responses and feedback promptly. Some people might prefer an LLM as a pair programmer or study partner for the same reasons: availability, flexibility, and general familiarity with the relevant field. Granted, they are (or should be) aware of the possibility of hallucinations and the need to check sources.

Regarding taxation, for the moment I would like to see how it goes. It is possible to tax the corporations (unlikely as that may be), and "agentic" AI is not able to do most jobs fully (even 2D artists').
And institutional knowledge is something that can easily get lost in layoffs. So I agree that unemployment caused by CEOs' lack of foresight and professional skill is a threat to their employees' livelihoods and can negatively affect the companies and end users in the long run. The most recent case I am aware of is PinkNews going for a "reporter-free newsroom" (the CEO is a dumb ****, so I expected as much).

So, the point being: there are areas where humans perform worse than genAI, necessity is relative, and human CEOs unconcerned with the long-term prospects of their companies or the societal outcomes of their decisions are a problem. One issue not mentioned is that, with LLMs and image generation widely available, malicious actors can use them as well, whether for spear-phishing, various kinds of photo editing, or hate speech at scale. At what point an undesirable side effect becomes an inherent feature, I cannot tell.