On Wednesday at 07:36 PM, Hawke64 said:
"On the other hand, the chatbot was able to provide the job titles for the desired career direction, what to watch out for in the adverts, how to format the CV, and how to pace the search, so it could be done alongside the ongoing employment without burning out. The LLM was also available at any time and provided responses and feedback promptly."

Those are the sort of things computers ought to be good at, since most of them amount to database queries and rote responses - two things people are notoriously bad at, especially if they aren't very motivated. Chatbots tend to be notoriously bad at anything outside the box or unpredicted, though, and tend to 'break' easily; e.g. the local supermarket chatbot would, if prompted to, make 'recipes' using a variety of noxious or poisonous ingredients, like motor oil instead of vegetable oil. Which is funny if you know motor oil ain't edible, not so much if you don't - and some people don't.

('AI' use in employment is something I find quite interesting, because it's one area where there's been quite a lot of pushback on both sides here. Employers hate having hundreds of obvious AI-slop CVs and covering letters to wade through; potential employees hate being interviewed by 'AI' chatbots instead of real people. It's also rather funny thinking of one 'AI' writing all the CVs and covering letters only for another 'AI' to judge the results.)
On Thursday at 05:55 AM, Amentep said:
"But that's the problem: LLMs have no 'general familiarity' with the field. They also don't have hallucinations. They can't think, they are not reasoning programs, they don't 'know' anything. An LLM has a large data set that a complex program uses to try to determine the most likely response to what you are asking, and provides it. I wouldn't trust it to do anything; the 'hallucinations' (a term that is part of the LLM industry's attempt to sell the product as a thinking machine rather than admit this is not 'true AI' as most laypeople would understand it) are just its predictive model being wildly off base (or using incorrect answers scraped from the depths of Reddit) and outputting incorrect statements which, if taken as logical, human-style thinking, can have, and have had, disastrous outcomes."

I partially agree with your statement.

Natural languages are heavily patterned, and most human knowledge is recorded in text, including descriptions of the properties of various physical objects. LLMs 'know' that in the sense of having this data and building relationships between words, so they do have internal representations of concepts. They obviously cannot have them as personal physical experience in the way humans do. However, actions and feedback are included in reinforcement learning and in user interactions (if incorrect responses were rated higher than correct ones, that can lead to issues).

Programming languages and study materials tend to be more structured than random text, so LLMs work better with them. You also do not need to ask your pair programmer whether they have a cold, whether they are hungry, or what they think about dogs (unless you are really bored).
It's nice if they remind you to stay hydrated, but they do not need to experience it physically themselves (neither does a calendar reminder, which is easier to set up). Therefore, they can be fit for a particular purpose - and, given the wide adoption of Claude, they are. Here's hoping the developers can understand the code they ship.

However, as you've said, LLMs (and humans) can be wrong, and LLMs cannot be held accountable for their errors (nor can some humans, unless you consider Luigi Mangione to be inspirational; but, again, that's more of a systemic social issue and not directly related to LLMs). Therefore, ideally you would not want either in a decision-making position.

10 hours ago, Zoraptor said:
"Those are the sort of things computers ought to be good at, since most of them amount to database queries and rote responses [...] It's also rather funny thinking of one 'AI' writing all the CVs and covering letters only for another 'AI' to judge the results."

I agree that the tools should be fit for purpose, and the job market can be challenging to navigate.
I personally find filling in application forms with multiple popup menus on an external website more annoying, especially when the exact same information is in your CV and they cannot even scrape that correctly. One would hope it at least discourages competitors, so the resulting pool is smaller. Overall, the first rounds of interviews exist to shortlist the more suitable candidates and tend to be outsourced to people who know little about the field you would be working in. So using a chatbot at this stage, and just reading the summary or watching a video recording, is not a bad idea. When you get to the practical exercises and need to explain your reasoning, that is when you'd want your potential team lead to be present.
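Amentep's "predictive model" framing above can be illustrated with a toy sketch, making no claims about any real model's internals: a bigram counter that always emits the most frequent continuation it has seen. The corpus, words, and the `predict_next` helper are all made up for illustration; real LLMs are neural networks over tokens with sampling, not word-frequency tables.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus, chosen to echo the motor-oil example above.
corpus = (
    "motor oil is not edible . vegetable oil is edible . "
    "vegetable oil goes in recipes . motor oil goes in engines ."
).split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation seen after `word`."""
    return following[word].most_common(1)[0][0]

print(predict_next("vegetable"))  # -> "oil"
print(predict_next("motor"))      # -> "oil"
```

The point of the sketch is that the predictor has no notion of edibility: if the training data pairs "motor oil" with recipes often enough, it will happily continue in that direction, which is the statistical shape of a "hallucination".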