I tried the same Ai and asked it to provide a list of 20 things, it only gave me 5. I asked for the rest and it also apologized and then provided the rest. It's weird that it stumbles at first but is able to see it's error and fix it. I wonder if it's a thing that it 'learned' from the data set. People not correctly answering prompts the first time.
Might be an intentional limitation to avoid issues like the "buffalo" incident with GPT3 (it would start leaking information it shouldn't after repeating a word too many times).
I personally don't think a large section of the population meets the requirement for general intelligence so I think it's a bit rich to expect the AI to do it as well.
It's weird though because they were able to point out they got to absurdity to its comment and it did agree. No it's not just algorithmic phrase matching, there is an actual "thought process" going on.
I've never been able to get an AI to explain its logic though which is a shame. I'm sure it would be useful to know why they come up with the answers they do.
I've never been able to get an AI to explain its logic though which is a shame. I'm sure it would be useful to know why they come up with the answers they do.
you and AI researchers both. it's probably a trillion-dollar problem at this point
they were able to point out they got to absurdity to its comment and it did agree. No it's not just algorithmic phrase matching, there is an actual "thought process" going on.
Or it just knows to say those words when someone says "are you sure?" or something similar.
But then it provided the correct answer so it's not just a rote response. If it was it would say no I am not sure, but then it wouldn't be able to provide the response.
You could test it on a correct answer. Ask a question, see if it gives a correct answer, then ask "are you sure?" to see what kind of response it gives. My guess is that you won't get an answer like "yes, I'm sure, that was the correct answer."