Obsidian Forum Community

AI and ethics (or lack thereof)


As it so happens, I sometimes read stuff on the internet. This article caught my interest and made me think about a number of related questions.

This monkey selfie will protect you from AI slop

My own interest in AI is mostly in image generation (I have been a fan of old Clyde Caldwell and Boris Vallejo paperback covers since the dawn of time, to the point of buying official prints from those artists' official sites). First Stable Diffusion and now Flux are my hobbies for creating my weird fantasy images (yes, that includes bikini chainmail too). I would love to branch into short AI-generated videos some day, but that is also the extent of my interest in AI. I once tried it on my CV and I didn't recognize myself, so I went "screw this, my own words describe me better".

The BBC article is long and covers a number of areas, like who owns the results of AI output (including some rulings from the US legal system stating that nobody owns something generated by AI).

It made me think back to recent movies featuring dead actors recreated by AI. Does the movie company own the likeness of an actor 500 years after their death too? What happens to a poor sucker born in 200 years' time who just happens to look like their ancestor? If your homework was solved by AI, was it your solution or that of those who trained the AI? Do we need humans? Luckily I am old enough to not care, but I figure the last question could be relevant for younger people 😁

Edit: Some experiences with AI show that it lies blatantly and doubles down on its lies, sounding very confident in its own words while being obviously wrong. Then there is the whole hate speech and prejudice issue. If AI is the sum of human writing (the internet), who is to blame for obvious bias and racism in AI-generated output?

“He who joyfully marches to music in rank and file has already earned my contempt. He has been given a large brain by mistake, since for him the spinal cord would surely suffice.” - Albert Einstein
 

Spoil me, did the monkey wind up having copyright of his own likeness?

Flawed builders build flawed buildings. How can you possibly parse "the sum of human writing" and fact check it? Then who checks the checker?

If possible, the AI bots should be shot on sight

"Akiva Goldsman and Alex Kurtzman run the 21st century version of MK ULTRA." - majestic

"you're a damned filthy lying robot and you deserve to die and burn in hell." - Bartimaeus

"Without individual thinking you can't notice the plot holes." - InsaneCommander

"Just feed off the suffering of gamers." - Malcador

"You are calling my taste crap." -Hurlshort

"thankfully it seems like the creators like Hungary less this time around." - Sarex

"Don't forget the wakame, dumbass" -Keyrock

"Are you trolling or just being inadvertently nonsensical?' -Pidesco

"we have already been forced to admit you are at least human" - uuuhhii

"I refuse to buy from non-woke businesses" - HoonDing

"feral camels are now considered a pest" - Gorth

"Melkathi is known to be an overly critical grumpy person" - Melkathi

"Oddly enough Sanderson was a lot more direct despite being a Mormon" - Zoraptor

"I found it greatly disturbing to scroll through my cartoon's halfing selection of genitalias." - Wormerine

"I love cheese despite the pain and carnage." - ShadySands

  • Author
1 hour ago, PK htiw klaw eriF said:

If possible, the AI bots should be shot on sight

I just killed one in this very thread 😂

 

This thread inspired me to change my signature.

Thou shalt not make a machine in the likeness of a human mind.

From a quick skim of the article, it is solely about copyright rather than the environmental impact, employment, and developers' ability to troubleshoot AI-generated code.

As a man with an MSc in Data Science who does not work with genAI (not because I do not want to), I unironically love the topic. The 2D artists whose livelihoods were affected by it have significantly stronger feelings. Their logic is that whether or not you would have paid a person for the work, the models you've used were likely trained on their work. I would say that labour for its own sake, i.e. labour that produces nothing and gains no skills (or at least some satisfaction or financial compensation), is soul-crushingly pointless and has no inherent value.

There are several aspects which, when combined, might make one less comfortable. LLMs (Large Language Models) are trained to be extremely confident yet supportive, and to go along with the user's suggestions, because humans perceive confidence as knowledge (and there is some link between eloquence and perceived intelligence, which negatively affects primarily immigrants). LLMs are prediction models and do not possess any "ground truth", just a lot of data with different weights attached. They can work well for data summarisation or for generic material, but less so for niche subjects (and if you are unfamiliar with the field, you might not be able to spot the errors).
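The "prediction, not ground truth" point can be sketched as a toy example (everything below is made up for illustration; no real model works off a hand-written lookup table, but the shape of the problem is the same: the model always emits its most probable continuation, with no truth check anywhere in the loop):

```python
# Toy illustration: a language model is, at its core, a next-token
# predictor. Given a context, it picks the most probable continuation.
# It always produces *an* answer -- confidently -- even where its
# "knowledge" is thin, because there is no ground-truth check.

# Hypothetical, hand-made probabilities (an assumption for this sketch).
NEXT_TOKEN_PROBS = {
    "the capital of France is": {"Paris": 0.92, "Lyon": 0.05, "Toulouse": 0.03},
    "the capital of Wakanda is": {"Birnin": 0.40, "Paris": 0.35, "Zana": 0.25},
}

def predict_next(context: str) -> str:
    """Return the most probable next token for the context (greedy decoding)."""
    probs = NEXT_TOKEN_PROBS[context]
    return max(probs, key=probs.get)

print(predict_next("the capital of France is"))   # well-covered fact: "Paris"
print(predict_next("the capital of Wakanda is"))  # niche data: still answers
```

Note how the second query gets an answer with barely more support than the alternatives; the caller sees no difference in confidence between the two.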

At the moment, the older models are provided for free to build reliance on them; since unused skills deteriorate, this is expected to lead to dependence (I can tell that I cannot easily multiply 3+ digit numbers without writing them down). You can see a similar pattern (en****tification) of building a user base and then extracting value from it in other industries, such as video streaming.

For software development in particular, I've been told that Claude Opus is a fantastic tool. The gotcha is that the developers must understand and be able to troubleshoot the code it generates; otherwise, the software will be impossible to support long-term.

There are some other drawbacks and use cases, and most are summarised in Abigail Thorn's video (1h): https://www.youtube.com/watch?v=AaU6tI2pb3M


If you do need an LLM in your life, I would still suggest running one locally (it can be done with a £700 Mac Mini M2).

8 hours ago, PK htiw klaw eriF said:

If possible, the AI bots should be shot on sight

You mean retired

Why has elegance found so little following? Elegance has the disadvantage that hard work is needed to achieve it and a good education to appreciate it. - Edsger Wybe Dijkstra

13 hours ago, Gorth said:

My own interest in AI is mostly in image generation (I have been a fan of old Clyde Caldwell and Boris Vallejo paperback covers since the dawn of time, to the point of buying official prints from those artists' official sites). First Stable Diffusion and now Flux are my hobbies for creating my weird fantasy images (yes, that includes bikini chainmail too). I would love to branch into short AI-generated videos some day, but that is also the extent of my interest in AI. I once tried it on my CV and I didn't recognize myself, so I went "screw this, my own words describe me better".

Ooookay...apologies in advance, because this is probably going to seem unnaturally harsh, but I wish people who claim to love artists would follow that supposed love into not supporting LLMs that steal their work with no compensation all so people can churn out slop that looks vaguely like their work, thus devaluing the work of the actual artist.

Let's be clear, LLMs aren't AI, as they don't 'know' or 'think', and they only exist through theft of people's hard work. That's not getting into the environmental, electrical-grid, or quality-of-life (if you live near one) issues, which are all significant.

Right now, there is not, in my opinion, an ethical way to engage with these commercial LLMs.

I cannot - yet I must. How do you calculate that? At what point on the graph do "must" and "cannot" meet? Yet I must - but I cannot! ~ Ro-Man

Yes indeed. There really isn't a 'moral' way to use LLMs. Even non-commercial use has the environmental and other* issues. Sadly typical that it's allowed/legal in the first place, really.

If you or I ignore copyright, we get in trouble. If some multi-billion-dollar corp does, it's fine. Indeed, many of the same corporations flagrantly stealing** other people's work for profit are the same ones that lobbied for harsh copyright and patent rules, and litigate stridently under their aegis. When it's their IPs being violated, at least.

Doesn't help that the enforcement companies have absolutely no balls either. They'll happily go after some store playing commercial radio for royalties due, but when an 'AI' vendor hoovers it all up for commercial repackaging... crickets. Just a bit too hard for them, intimidating Meta/ MS/ OpenAI/ Musk's lawyers instead of Archie's Burger's owner operator, I guess.

*flabbers still well and truly ghasted that our government here thinks 10% of our electricity supply for up to 50 (fifty) jobs at an AI centre is a good investment. It's going to drive up electricity prices for everyone else, for basically no benefit. Except, perhaps, some politicians' post-political-career board prospects.

**it isn't of course, it's copyright infringement, but since many of the same corps sponsored those obnoxious "you wouldn't download a car!!!" type ads it's fair game to incorrectly use it back at them

10 hours ago, PK htiw klaw eriF said:

If possible, the AI bots should be shot on sight

The problem isn't the bots, it's the people who set up and run them...

Quote

Against stupidity we have no defense. Neither protests nor force can touch it. Reasoning is of no use. Facts that contradict personal prejudices can simply be disbelieved - indeed, the fool can counter by criticizing them, and if they are undeniable, they can just be pushed aside as trivial exceptions. So the fool, as distinct from the scoundrel, is completely self-satisfied. In fact, they can easily become dangerous, as it does not take much to make them aggressive. For that reason, greater caution is called for than with a malicious one. Never again will we try to persuade the stupid person with reasons, for it is senseless and dangerous.

21 hours ago, Gorth said:

As it so happens, I sometimes read stuff on the internet. This article caught my interest and made me think about a number of related questions.

This monkey selfie will protect you from AI slop

My own interest in AI is mostly in image generation (I have been a fan of old Clyde Caldwell and Boris Vallejo paperback covers since the dawn of time, to the point of buying official prints from those artists' official sites). First Stable Diffusion and now Flux are my hobbies for creating my weird fantasy images (yes, that includes bikini chainmail too). I would love to branch into short AI-generated videos some day, but that is also the extent of my interest in AI. I once tried it on my CV and I didn't recognize myself, so I went "screw this, my own words describe me better".

The BBC article is long and covers a number of areas, like who owns the results of AI output (including some rulings from the US legal system stating that nobody owns something generated by AI).

It made me think back to recent movies featuring dead actors recreated by AI. Does the movie company own the likeness of an actor 500 years after their death too? What happens to a poor sucker born in 200 years' time who just happens to look like their ancestor? If your homework was solved by AI, was it your solution or that of those who trained the AI? Do we need humans? Luckily I am old enough to not care, but I figure the last question could be relevant for younger people 😁

Edit: Some experiences with AI show that it lies blatantly and doubles down on its lies, sounding very confident in its own words while being obviously wrong. Then there is the whole hate speech and prejudice issue. If AI is the sum of human writing (the internet), who is to blame for obvious bias and racism in AI-generated output?

This is an interesting development, but it doesn't address the main issue with AI-generated content: how it undermines real human creativity, and then the question of financial reparations.

It's "wrong" to use AI to create something like art that replaces humans; that's the real issue for me, and your link mentions this:

"It definitely forecloses the most dystopian outcome of machines entirely replacing humans [in the world of art and entertainment]," says Stacey Dogan, a professor who studies intellectual property, competition and technology at the Boston University School of Law.

Some predict a future where you just plop down in front of an AI system instead of watching the work of human beings. But without copyright protection for AI-generated work, the business case for building that world takes a major hit. For big entertainment companies like Disney, there's still a huge financial incentive to let humans run the creative process."

This monkey photo is a real photo, it just wasn't taken by a human, so this whole legal debate is really about copyright and AI-generated content.

And I don't want AI content to be allowed to get copyright.

"Abashed the devil stood and felt how awful goodness is and saw Virtue in her shape how lovely: and pined his loss”

John Milton 

"We don't stop playing because we grow old; we grow old because we stop playing.” -  George Bernard Shaw

"What counts in life is not the mere fact that we have lived. It is what difference we have made to the lives of others that will determine the significance of the life we lead" - Nelson Mandela

 

 

I'd give the monkey copyright (at least it did something for itself rather than steal the work of millions of people, reconstitute it, and say "here's your slop") before I gave an LLM, the owner of the LLM, or the 'prompt artist' copyright.

Also, from the article-

Thaler believes Dabus and similarly powerful AI systems are conscious.

There are far too many people who are anthropomorphizing these LLMs. It's unhealthy.


9 hours ago, Amentep said:

Let's be clear, LLMs aren't AI, as they don't 'know' or 'think', and they only exist through theft of people's hard work.


"because they filled mommy with enough mythic power to become a demi-god" - KP

  • Author
30 minutes ago, Amentep said:

I'd give the monkey copyright (at least it did something for itself rather than steal the work of millions of people, reconstitute it, and say "here's your slop") before I gave an LLM, the owner of the LLM, or the 'prompt artist' copyright.

Also, from the article-

There are far too many people who are anthropomorphizing these LLMs. It's unhealthy.

This

The whole idea of "copyright", iirc, was introduced to protect original work, not copies of original work. This is of course another can of worms to kick over. When is something original? Wasn't there a saying once that there are only 7 stories in the world, and everything else is a derivative of those stories? When does a derivative deviate enough to not be considered derivative anymore? This deviant is curious 🤔

I find AI fascinating from a technical standpoint, ever since working with Expert Systems and Neural Networks at university in the 90s. Before that, in the 1980s, my Commodore 64, with the assistance of a great textbook purchased in Germany, gave me my first lessons in the sheer computational demands of anything more advanced than Eliza for the C64.

Since the C64 didn't have internet, the only "training" for simple neural networks was questions and answers (feedback responses) from the human operator. A simplified "You", if you want. LLMs harvest everything, so they get all the worst of humanity too. Tests prove that AI can be racist, bigoted, hateful, and homicidal if left to its own devices. Grok is an example of where unsupervised AI can go. I had a strategy game once where the opponent was AI controlled. Literally. It almost killed the PC, fans going like turboprops, taking forever. At least in the beginning. As the neural network got trained, it started moving faster and played a mean game. But it still required a lot of energy.
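That operator-feedback loop can be sketched as a tiny perceptron; everything here (the AND task, the epoch count) is illustrative, not a reconstruction of any particular 80s textbook exercise:

```python
# A minimal sketch of "training by feedback": a single perceptron learns
# the AND function from repeated question/answer rounds, where the only
# "training signal" is the correct answer supplied by the operator.

def train_perceptron(samples, epochs=20):
    w1, w2, b = 0, 0, 0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
            err = target - out        # the "feedback" from the human operator
            w1 += err * x1
            w2 += err * x2
            b += err
    return w1, w2, b

AND_SAMPLES = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train_perceptron(AND_SAMPLES)

def predict(x1, x2):
    return 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0

print([predict(x1, x2) for (x1, x2), _ in AND_SAMPLES])  # [0, 0, 0, 1]
```

Even this toy shows why training was so expensive on 80s hardware: every round touches every weight, and real networks multiply that by thousands of weights and samples.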

 

I mostly kinda hate AI, but I also use it all the time. The main issue I have is that it doesn't know when it's wrong, and when it moves out on thin ice (an area where the training data is particularly weak), it's wrong all the time.

Whenever I see AI art, like the recent craze of "here is my photo, now draw me like Miyazaki", it feels sacrilegious. We know exactly what he thinks about all of this, and also that the whole of Sen to Chihiro was drawn by hand. I still use it to screw around and to see what it can do, but it doesn't excite me in the moment like drawing by hand does.

What AI would you use to read a EULA, or all the fine print you never read when you sign up for a gym membership or similar? I'm thinking one on your phone with text recognition would be very useful. Unless it's to read the EULA on your phone, I suppose.

Na na  na na  na na  ...

greg358 from Darksouls 3 PVP is a CHEATER.

That is all.

 

I've signed on to a month's free Gemini AI and asked it to remind me to cancel the day before I'm charged. We'll see where its loyalties lie.


 

On 4/17/2026 at 6:16 PM, Totally not Gorgon said:

I mostly kinda hate AI, but I also use it all the time. The main issue I have is that it doesn't know when it's wrong, and when it moves out on thin ice (an area where the training data is particularly weak), it's wrong all the time.

Whenever I see AI art, like the recent craze of "here is my photo, now draw me like Miyazaki", it feels sacrilegious. We know exactly what he thinks about all of this, and also that the whole of Sen to Chihiro was drawn by hand. I still use it to screw around and to see what it can do, but it doesn't excite me in the moment like drawing by hand does.

I use AI chatbots for my studies and to confirm things I already more or less know but have forgotten the specific details of. I use Copilot primarily.

So for example, what was the final score in the 1995 Rugby World Cup?

It tends to be mostly accurate.

But I don't use it for important things I know nothing about; I will use credible websites for that.

And I never use it for political, ideological, or societal feedback. That's when the understandable issues and bias with it become the most obvious.


 

 

1 hour ago, BruceVC said:

So for example, what was the final score in the 1995 Rugby World Cup?

It tends to be mostly accurate

You could also get a mostly accurate result looking it up on Wikipedia, which would be almost as fast and would not require you to use the environment-damaging plagiarism machine...


On 4/17/2026 at 10:40 AM, Gorth said:

This

The whole idea of "copyright" iirc, was introduced to protect original work. Not copies of original work. This is of course another can of worms to kick over. When is something original? Wasn't there a saying once, that there are only 7 stories in the world, everything else is derivatives of those stories. When does a derivative deviate enough to not be considered derivative anymore? This deviant is curious 🤔

I find AI fascinating from a technical standpoint, ever since working with Expert Systems and Neural Networks at university in the 90s. Before that, in the 1980s, my Commodore 64, with the assistance of a great textbook purchased in Germany, gave me my first lessons in the sheer computational demands of anything more advanced than Eliza for the C64.

Since the C64 didn't have internet, the only "training" for simple neural networks was questions and answers (feedback responses) from the human operator. A simplified "You", if you want. LLMs harvest everything, so they get all the worst of humanity too. Tests prove that AI can be racist, bigoted, hateful, and homicidal if left to its own devices. Grok is an example of where unsupervised AI can go. I had a strategy game once where the opponent was AI controlled. Literally. It almost killed the PC, fans going like turboprops, taking forever. At least in the beginning. As the neural network got trained, it started moving faster and played a mean game. But it still required a lot of energy.

It is the "garbage in, garbage out" situation: LLMs are tools and depend on the ones who make and use them, so Grok reflects Musk and the current Xitter population. Structured, high-quality data and reinforcement learning should provide better results, but they require effort (time, funding, etc.).

The energy consumption and e-waste are issues, though. Hence my general dislike of the corporate-owned models, despite their current affordability (also, that affordability can easily be taken away). One could consider open-source software as an example of people cooperating, but I am unsure if that can work for the training and hosting.

23 hours ago, Gorgon said:

What Ai would you use to read a EULA, or all the fine print you never read when you sign up for a gym membership or similar. I'm thinking one on your phone with text recognition would be very useful. Unless it's to read the EULA on your phone I suppose.

It is a good use case. Alternatively, forcing companies to provide contractual terms in a layperson-readable format could achieve the same result (some already do it). I know I am not paying a lawyer to check a random EULA for me, though I try to read them briefly (some are interesting; the software ones usually amount to "the software might not work, we will change it however we want, and you will not sue us").

On 4/17/2026 at 1:13 AM, Amentep said:

Ooookay...apologies in advance, because this is probably going to seem unnaturally harsh, but I wish people who claim to love artists would follow that supposed love into not supporting LLMs that steal their work with no compensation all so people can churn out slop that looks vaguely like their work, thus devaluing the work of the actual artist.

Let's be clear, LLMs aren't AI, as they don't 'know' or 'think', and they only exist through theft of people's hard work. That's not getting into the environmental, electrical-grid, or quality-of-life (if you live near one) issues, which are all significant.

Right now, there is not, in my opinion, an ethical way to engage with these commercial LLMs.

On the last point, I'm not sure you have to accept moral debt because of energy use. That's kinda like saying don't vacuum, use a broom, or don't travel by air without buying a carbon offset. It's logically correct, sure, it's just... that's too much to worry about for one individual; these are problems facing society that have to be engaged with systemically.

The only one of those examples that is near equivalent is the air travel one since most air travel is not really necessary, it's convenient. 'AI' isn't really necessary either it's just convenient (and makes some people a lot of money). It's probably closest to something like littering, maybe: sure, that one piece of plastic wrap you dropped won't make a difference but it certainly would if everyone starts doing it.

(On a narrow economic basis:

The trouble with 'AI' energy-wise is that it already uses the same (maybe more, now) energy as the UK. And the year-on-year increase is not much less than that, either. While no tears should be shed for Gamers crying about Jensen making it uneconomic for them to buy dual 5090 Titans any more, all those AI cards go into data centres and are run 24/7/365 at 1 kW a pop. That mounts up, rapidly. To use a vacuum cleaner comparison, it is the equivalent of running 8 low-model Dysons simultaneously, and in perpetuity, per card.

Also, buying up all the RAM affects people who do need computers, whether for education or just because their computer broke. They now have to pay more, same as they do for the electricity going into those 8 Dyson equivalents, in a rack, in a hectare-sized site. With 20 employees.

Plus of course all the economic consequences of people getting sacked so 'AI' can take their jobs. Which really ought to be the big worry. All those people not contributing to taxes, being on welfare, not buying stuff because they have no money etc. It's the last one that is really the kicker, because that will put non 'AI' service economy stuff out of business. Which will all look nice on a corporation's balance sheet, right up until it doesn't any more.

And all because you decided to use AI to set a reminder rather than just putting it on your calendar. Shaking my smh my head)
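As a back-of-the-envelope check of those figures (the 1 kW per card is the post's own number; the ~125 W "low-model" vacuum draw is an assumed figure, chosen only to make the comparison concrete):

```python
# Sanity check of the "8 Dysons per card" comparison and what one
# always-on 1 kW accelerator card costs in energy per year.
CARD_WATTS = 1000        # one data-centre AI card, per the post above
VACUUM_WATTS = 125       # assumed draw of a low-model cordless vacuum
HOURS_PER_YEAR = 24 * 365

dyson_equivalents = CARD_WATTS / VACUUM_WATTS
annual_kwh = CARD_WATTS * HOURS_PER_YEAR / 1000  # watts -> kilowatt-hours

print(f"{dyson_equivalents:.0f} vacuums running continuously")  # 8
print(f"{annual_kwh:.0f} kWh per card per year")                # 8760
```

So a single card running flat out for a year draws roughly 8,760 kWh, several times a typical household's annual electricity use, before multiplying by the thousands of cards in one data centre.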

9 hours ago, Zoraptor said:

The only one of those examples that is near equivalent is the air travel one since most air travel is not really necessary, it's convenient. 'AI' isn't really necessary either it's just convenient (and makes some people a lot of money). It's probably closest to something like littering, maybe: sure, that one piece of plastic wrap you dropped won't make a difference but it certainly would if everyone starts doing it.

(On a narrow economic basis:

The trouble with 'AI' energy-wise is that it already uses the same (maybe more, now) energy as the UK. And the year-on-year increase is not much less than that, either. While no tears should be shed for Gamers crying about Jensen making it uneconomic for them to buy dual 5090 Titans any more, all those AI cards go into data centres and are run 24/7/365 at 1 kW a pop. That mounts up, rapidly. To use a vacuum cleaner comparison, it is the equivalent of running 8 low-model Dysons simultaneously, and in perpetuity, per card.

Also, buying up all the RAM affects people who do need computers, whether for education or just because their computer broke. They now have to pay more, same as they do for the electricity going into those 8 Dyson equivalents, in a rack, in a hectare-sized site. With 20 employees.

Plus of course all the economic consequences of people getting sacked so 'AI' can take their jobs. Which really ought to be the big worry. All those people not contributing to taxes, being on welfare, not buying stuff because they have no money etc. It's the last one that is really the kicker, because that will put non 'AI' service economy stuff out of business. Which will all look nice on a corporation's balance sheet, right up until it doesn't any more.

And all because you decided to use AI to set a reminder rather than just putting it on your calendar. Shaking my smh my head)

I would like to start by saying that the following are personal opinions, observations, and anecdotes, not a scientific study (alas, no data and not enough inclination for that).

I can see the point in regard to energy consumption, and I also find it quite irritating that our social group, Gamers, seeks and encourages higher energy use on something as frivolous as graphical fluff (may UE5 be sunsetted).

Regarding the necessity, it is very relative. I do not have mobility impairments and can use a broom instead of a vacuum cleaner (I believe the animal companion prefers less noise), while someone whose job and source of income is cleaning would go for the more "human energy"-efficient option. In the case of LLMs, a use case I've seen is job search, a very generic activity with a large amount of text around it. One of the people I know tried to use the free (government-funded) employment assistance services. The meatbags there were nigh useless and apparently could not parse the person's educational background and previous employment, while the positions suggested could have been randomly pulled from a pool. The chatbot, on the other hand, was able to provide job titles for the desired career direction, what to watch out for in the adverts, how to format the CV, and how to pace the search so it could be done alongside ongoing employment without burning out. The LLM was also available at any time and provided responses and feedback promptly.

Some people might prefer LLMs as a pair programmer or a study partner for the same reasons: availability, flexibility, and general familiarity with the relevant field. Granted, they are/should be aware of the possibility of hallucinations and the necessity of checking sources.

Regarding taxation, at the moment I would like to see how it is going to go. It is possible to tax the corporations (unlikely as it may be), and the "agentic" AI is not able to do most jobs fully (even 2D artists'). And institutional knowledge is a thing that can easily get lost in layoffs. So, I agree that the lack of employment due to the CEOs' lack of foresight and professional skill is a threat to the livelihoods of their employees and can negatively affect the companies and end-users in the long run. The most recent case I am aware of is PinkNews going for a "reporter-free newsroom" (the CEO is a dumb ****, so expected as much).

So, the point being, there are areas where humans perform worse than the genAI, the necessity is relative, and the human CEOs not being concerned with the long-term prospects of their companies or the societal outcome of their decisions is an issue.

An issue not mentioned with LLMs and image generation being widely available is that malicious actors can use them as well, whether for spear-phishing, various photo editing, or hate speech at scale.

At what point an undesirable side effect becomes an inherent feature I cannot tell.

5 hours ago, Hawke64 said:

Some people might prefer LLMs as a pair programmer or a study partner for the same reasons: availability, flexibility, and general familiarity with the relevant field. Granted, they are/should be aware of the possibility of hallucinations and the necessity of checking sources.

But that's the problem: LLMs have no "general familiarity" with the field. They also don't have hallucinations. They can't think, they are not reasoning programs, they don't 'know' anything. An LLM has a large data set that a complex program uses to try to determine the most likely response to what you are asking, and provides it. I wouldn't trust it to do anything; the 'hallucinations' (a term that is part of the LLM industry's attempt to sell its product as a thinking machine rather than admit that this is not 'true AI' as most laypeople would understand it) are just its predictive model being wildly off base (or using incorrect answers scraped from the depths of Reddit) and outputting incorrect statements which, if taken as logical human-style thinking, can have, and have had, disastrous outcomes.

