4 comments
  • I do not think that giving AI human rights is appropriate at this time, and I do not think it is a question that will need to be re-addressed anytime in the near future. We are in the beginning stages of a new technological breakthrough, namely large language models. There is a lot of hype and uncertainty because we're in those beginning stages, but ultimately I suspect they are going to be really cool and capable clockwork machines. The question is just where the edge of their capabilities lies.

    It is very difficult to get an LLM to do anything outside of the scope of what it was trained to do, so they do not really adapt to new environments. That adaptability is an important facet of humanness, as far as I can tell, because it is used in almost everything we do. Even if LLMs were as advanced as (say) a cockroach in adapting to an environment, they would still be a long way off from hitting a threshold that would be appropriate for human rights considerations. And we aren't anywhere near cockroach level yet.

    One day in the future we may have to re-address the issue, but I will be very surprised if I'm alive to see it. Perhaps I'm wrong though :)

    • I also do not think it is a question that will need to be re-addressed anytime in the near future. One day in the future we may have to re-address the issue, but I will be very surprised if I’m alive to see it. Perhaps I’m wrong though :)

      I would contend that you might well be wrong (sort of).

      It is very difficult to get an LLM to do anything outside of the scope of what it was trained to do, so they do not really adapt to new environments.

      You are not wrong here: Have AI Language Models Achieved Theory of Mind? <- "Not yet"

      Even if the LLMs were as advanced as a (say) cockroach in adapting to an environment, they would still be a long way off from hitting a threshold that would be appropriate for human rights considerations. And we aren’t anywhere near cockroach level yet.

      This is where you are (slightly) wrong, from my perspective.

      In cognitive science, brain cognitive processing signals are typically utilized to study human language processing. Therefore, it is natural to ask how well the text embeddings from LLMs align with the brain cognitive processing signals, and how training strategies affect the LLM-brain alignment? In this paper, we employ Representational Similarity Analysis (RSA) to measure the alignment between 23 mainstream LLMs and fMRI signals of the brain to evaluate how effectively LLMs simulate cognitive language processing.

      Experimental results reveal that expanding the size of pre-training data, scaling up models, and employing alignment training contribute to enhancing LLM-brain similarity, emphasizing the importance of high-quality SFT data in elevating LLM-brain similarity. Moreover, explicit prompts aid LLMs in understanding human intentions, and alignment training enhances the sensitivity to prompts. Notably, LLMs exhibit a stronger resemblance to humans in positive sentiment. The strong correlation between LLM-brain similarity and various LLM evaluations suggests that the proposed LLM-brain similarity possesses generalization and rationality, which could serve as a new way to evaluate LLMs from the cognitive perspective.

      Do Large Language Models Mirror Cognitive Language Processing?
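      For anyone unfamiliar with Representational Similarity Analysis (RSA), the core computation the abstract refers to is simple enough to sketch. This is a minimal illustration, not the paper's actual pipeline; the variable names, array shapes, and the specific dissimilarity and correlation choices here are my own assumptions:

      ```python
      # Minimal RSA sketch: correlate an LLM's sentence embeddings with fMRI
      # responses to the same sentences via their dissimilarity structure.
      # Shapes below are illustrative assumptions, not the paper's setup.
      import numpy as np
      from scipy.spatial.distance import pdist
      from scipy.stats import spearmanr

      def rsa_similarity(llm_embeddings: np.ndarray, fmri_patterns: np.ndarray) -> float:
          """Return the correlation between the two representational dissimilarity matrices."""
          # Pairwise dissimilarity (1 - Pearson correlation) over all sentence pairs,
          # returned as the condensed upper triangle of each RDM.
          llm_rdm = pdist(llm_embeddings, metric="correlation")
          brain_rdm = pdist(fmri_patterns, metric="correlation")
          # Spearman correlation between the two RDMs is the LLM-brain alignment score.
          rho, _ = spearmanr(llm_rdm, brain_rdm)
          return rho

      # Example with random data standing in for real embeddings and fMRI signals:
      # 50 sentences, 768-dim embeddings, 2000 voxels.
      rng = np.random.default_rng(0)
      score = rsa_similarity(rng.normal(size=(50, 768)), rng.normal(size=(50, 2000)))
      print(f"LLM-brain RSA similarity: {score:.3f}")
      ```

      The point of the comparison is that only the *relational* structure (which sentences are represented similarly to which) is compared, so the two systems do not need to share a coordinate space or dimensionality.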

      When you consider that neurons themselves are strikingly similar across the animal kingdom, it all begins to make sense. "You have the same basic building blocks for vertebrates and invertebrates," says Strausfeld, "and there are certain ways you can put these building blocks together [into brains]." So when it came to building a brain center like the hippocampus that can recognize places, there might have been only one way to wire those quirky neurons together to do the job — and evolution arrived at that same solution multiple times independently, just as the genetic instructions for wings evolved multiple times in distinct lineages.

      "Probably what consciousness requires," says Koch of Caltech, "is a sufficiently complicated system with massive feedback. Insects have that. If you look at the mushroom bodies, they're massively parallel and have feedback."

      Consciousness in a Cockroach

      I only had time to gather the notes that I need to make a reply, so I will reply again with something more than just the notes :)

  • There's likely a point where rights for AI models should be considered; however, current AI technology is nowhere near that point.

    That said: No.