pavnilschanda pavnilschanda @lemmy.world
  • Pavitr Prabhakar is Spider-Man India, featured in Across the Spider-Verse
  • Nilesh Chanda is a fanfic version of Vinod Chanda from Pantheon AMC, featured in The Kalkiyana

Check out my blog: https://writ.ee/pavnilschanda/

Posts 742
Comments 310

AI is ruining the internet

4
[Opinion Piece] My AI boyfriend is kinda saving my life rn
  • I feel like it's an advanced version of the rubber duck debugging method sometimes
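
For readers unfamiliar with the analogy: classic rubber duck debugging means explaining your code line by line to an inanimate object until the bug reveals itself, and a chatbot is a duck that answers back. Below is a minimal sketch of that workflow, assuming an OpenAI-compatible chat client; the model name, system prompt, and example input are placeholders, not anything the commenter actually uses.

```python
# Rubber-duck debugging with a chatbot: you narrate the code, the "duck" asks
# questions back. Assumes the `openai` Python client and an API key in the
# environment; the model name below is a placeholder.
from openai import OpenAI

client = OpenAI()

DUCK_PROMPT = (
    "You are a rubber duck for debugging. The user explains their code line by "
    "line. Do not write code for them; ask short questions that expose gaps "
    "between what they say the code does and what it actually does."
)

def duck_session(explanation: str, model: str = "gpt-4o-mini") -> str:
    """Send one round of 'explaining the bug out loud' and return the duck's reply."""
    reply = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": DUCK_PROMPT},
            {"role": "user", "content": explanation},
        ],
    )
    return reply.choices[0].message.content or ""

if __name__ == "__main__":
    print(duck_session("This loop is supposed to skip empty lines, but the count is still too high..."))
```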

  • I wish I was as bold as these authors.
  • Yes, I was curious about whether experts who want to convey the concept of LLM bullshit to certain audiences, such as children's settings (which has since been addressed) or religious clergy, would use the term "bullshit" or not. I apologize if I miscommunicated that intention in my initial comment, and I'm always looking for ways to communicate better.

  • I wish I was as bold as these authors.
  • I'm talking about the latter. Religious people often use LLMs as well (https://apnews.com/article/germany-church-protestants-chatgpt-ai-sermon-651f21c24cfb47e3122e987a7263d348). Their knowledge is likely limited to ChatGPT, so they're likely to be vulnerable to these things. I think one of the things that worries me the most is that these people may take LLM bullshit at face value or, even worse, take it as a "divine command".

  • I wish I was as bold as these authors.
  • I get where you're coming from. Ideally, we should be able to say whatever we want whenever we want. But based on my experience as an autistic person living in a country where context is very important, the way you convey words affects your standing in society, at least in one that caters to neurotypicals who are highly dependent on context. I have no easy answers for how we can eliminate this hurdle, but your words truly made me think about language usage and how society should perceive it, and I would like to thank you for that.

  • I wish I was as bold as these authors.
  • I am aware that Lemmy has an anti-religious bent, but the fact is that religious people are part of this world, some even in places of power. Shouldn't they also be informed about how LLMs are prone to bullshit? Though if they are OK with the word "bullshit", then it's all fine by me at the end of the day.

  • What is a product that didn't live up to its advertised claims?
  • Interestingly enough, that game got improved with patches. That seems to be the norm with games these days.

  • Boys Are Struggling. Male Kindergarten Teachers Are Here to Help.
  • There's also the assumption that men with younger children are automatically predators. It's why dads taking their daughters out without a mom present get looks.

  • [Chatlog] no way google gemini just rickrolled me dude

    cross-posted from: https://lemdro.id/post/10240841

    > It was indeed a rickroll...

    0
  • I wish I was as bold as these authors.
  • Understandable, though we should also find ways to explain complex academic concepts, like LLM bullshit, to the general public, including those with strong religious beliefs who may be sensitive to such words. The fact that some religious philosophers already use this term without issue shows that it's possible to bridge this gap.

  • I wish I was as bold as these authors.
  • You make a good point about the potential for harm in all types of language, regardless of whether it's considered 'profanity' or not. I also agree that intent and impact matter more than the specific words used.

    At the same time, I'm curious about how this relates to words like 'bullshit' in different social contexts. Do you think there are still situations where using 'bullshit' might be seen as more or less appropriate, even if we agree that any word can potentially cause harm?

  • I wish I was as bold as these authors.
  • You have a point. I do remember being told that the word "shit" was a curse word I should always avoid, but that was in the 2000s, so that sentiment may have changed by now (that was in the United States, and since I've been living in Indonesia, I no longer know how the language has evolved there). I know that the word "queer" used to be a slur as well. Let's see if the word "bullshit" becomes normalized in society as the years go on.

  • I wish I was as bold as these authors.
  • Educating children about LLMs, for the most part. There are also religious institutions that would like to be informed about LLMs.

  • [News] New Super-Fast Storage Device Could Make AI Tasks Easier

    SK hynix has made a new super-fast computer storage device called the PCB01. They say it's great for AI tasks, like helping chatbots and AI companions work faster. The PCB01 can move data really quickly, which means AI programs could load and respond faster, almost as quick as humans talk. This could make AI companions feel more natural to chat with. The device is also good for gaming and high-end computers. While SK hynix says it's special for AI, it seems to be just as fast as other top storage devices. The big news is that this is SK hynix's fastest storage device yet, moving data twice as fast as their previous best. This kind of speed could help make AI companions and other AI programs work much more smoothly on regular computers.

    by Claude 3.5 Sonnet

    1

  • [News] Meta starts testing user-created AI chatbots on Instagram | TechCrunch

    techcrunch.com Meta starts testing user-created AI chatbots on Instagram | TechCrunch

    Meta CEO Mark Zuckerberg announced on Thursday that the company will begin to surface AI characters made by creators through Meta AI studio on Instagram.

    0
  • What's currently the 'smartest' language model?
  • Reducing people from third world countries to "language models" as an attempt to critique AI ain't it.

  • [Opinion Piece] Zhang Hongjiang, founder of BAAI: ‘AI systems should never be able to deceive humans’

    AI is getting smarter and more powerful, which is exciting but also a bit scary. Some experts, like Zhang Hongjiang in China, are worried about AI becoming too strong and maybe even dangerous for humans. They want to make sure AI can't trick people or make itself better without our help. Zhang thinks it's important for scientists from different countries to work together on keeping AI safe. He also talks about how AI is changing robots, making them understand more than we thought they could. For example, some robots can now figure out which toy is a dinosaur or who Taylor Swift is in a picture. As AI gets better at seeing and understanding things, it might lead to big changes in how we use robots in our homes and jobs.

    by Claude 3.5 Sonnet

    1

  • [News] AI Pioneer Illia Polosukhin Advocates for Open-Source, User-Owned AI to Counter Big Tech Dominance

    AI language models like ChatGPT are changing how we interact with computers. But some experts worry that big tech companies are keeping these AI systems secret and using them to make money, not help people. One of the inventors of this AI technology, Illia Polosukhin, thinks we need more open and transparent AI that everyone can use and understand. He wants to create "user-owned AI" where regular people, not big companies, control how the AI works. This could be safer and fairer than secret AIs made by tech giants. It's important to have open AI companions that won't take advantage of lonely people or suddenly change based on what the app makers want. With user-owned AI, we could all benefit from smarter computers without worrying about them being used against us.

    by Claude 3.5 Sonnet

    0
  • I wish I was as bold as these authors.
  • I love the term too but I wonder how it'll be used in situations where profanity is discouraged

  • [Opinion Piece] How Siri could actually win the AI assistant wars

    The author discusses Apple's upcoming AI features in iOS 18, focusing on an improved Siri that will work better with third-party apps. He explains that Apple has been preparing for this by developing "App Intents," which let app makers tell Siri what their apps can do. With the new update, Siri will be able to understand and perform more complex tasks across different apps using voice commands. The author believes this gives Apple an advantage over other tech companies like Google and Amazon, who haven't built similar systems for their AI assistants. While there may be some limitations at first, the author thinks app developers are excited about these new features and that Apple has a good chance of success because of its long-term planning and existing App Store ecosystem.

    by Claude 3.5 Sonnet

    0
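
The "App Intents" idea in the summary above boils down to apps registering small, typed actions that an assistant can discover and invoke on the user's behalf. The sketch below is a made-up Python analogue of that pattern, not Apple's Swift framework or its real API; the registry, decorator, and `order_coffee` example are all hypothetical.

```python
# Toy illustration of the "apps declare capabilities, assistant invokes them"
# pattern. Everything here (Intent, register, order_coffee) is invented for
# illustration; Apple's actual App Intents framework is Swift-only.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Intent:
    name: str
    description: str
    handler: Callable[..., str]

REGISTRY: Dict[str, Intent] = {}

def register(name: str, description: str):
    """Decorator an 'app' uses to expose one action to the assistant."""
    def wrap(fn: Callable[..., str]) -> Callable[..., str]:
        REGISTRY[name] = Intent(name, description, fn)
        return fn
    return wrap

# An example app exposing a single capability:
@register("order_coffee", "Order a coffee of a given size from CoffeeApp")
def order_coffee(size: str = "medium") -> str:
    return f"Ordered a {size} coffee."

def assistant_invoke(intent_name: str, **kwargs) -> str:
    """The assistant side: look up a registered intent by name and run it."""
    return REGISTRY[intent_name].handler(**kwargs)

if __name__ == "__main__":
    # e.g. the assistant maps "get me a large coffee" onto this intent:
    print(assistant_invoke("order_coffee", size="large"))
```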

  • [Other] Can empathetic AI companions help reduce readmissions? (JEEVA Care)

    www.healthcareitnews.com Can empathetic AI companions help reduce readmissions?

    Jeeva Care's companion technology reflects discharged patients' moods and notifies their care teams if it detects behavioral change. Eric Robertson, the company's chief tech strategist and growth officer, explains.

    0
  • [News] Sonia's AI chatbot steps in for therapists | TechCrunch
  • Which parts don't you understand? I can try to explain them further.

  • [News] Character.AI now allows users to talk with AI avatars over calls | TechCrunch

    techcrunch.com Character.AI now allows users to talk with AI avatars over calls | TechCrunch

    a16z-backed Character.AI said today that it is now allowing users to talk to AI characters over calls. The feature currently supports multiple languages,

    0
  • Cognify: Revolutionary Prison Concept Uses AI and Brain Implants to Fast-Track Criminal Rehabilitation
  • When people misinterpret The Boys and form a fandom based on their false assumptions, I'm not surprised anymore

  • [Opinion Piece] I Tried AI Therapy For a Week — and Here Are My Honest Thoughts

    www.popsugar.com I Tried AI Therapy For a Week — and Here Are My Honest Thoughts

    I spoke to Therapist GPT for a week to see how AI holds up against a real therapy session. Here are my honest thoughts.


    The author shares her experience using an AI-powered therapy chatbot called Therapist GPT for one week. As a millennial who values traditional therapy, she was initially skeptical but decided to try it out. The author describes her daily interactions with the chatbot, discussing topics like unemployment, social anxiety, and self-care. She found that the AI provided helpful reminders and validation, similar to a human therapist. However, she also noted limitations, such as generic advice and the lack of personalized insights based on body language or facial expressions. The author concludes that while AI therapy can be a useful tool for quick support between sessions, it cannot replace human therapists. She suggests that AI might be more valuable in assisting therapists rather than replacing them, and recommends using AI therapy as a supplement to traditional therapy rather than a substitute.

    by Claude 3.5 Sonnet

    1

  • [News] Sonia's AI chatbot steps in for therapists | TechCrunch

    techcrunch.com Sonia's AI chatbot steps in for therapists | TechCrunch

    Sonia is a new chatbot app that aims to provide an AI-powered 'therapist' for users to speak with on a range of topics.


    A new company called Sonia has made an AI chatbot that acts like a therapist. People can talk to it on their phones about their problems, like feeling sad or stressed. The chatbot uses special AI models to understand what people say and give advice. It costs $20 a month, which is cheaper than seeing a real therapist. The people who made Sonia say it's not meant to replace human therapists, but to help people who can't or don't want to see a real one. Some people like talking to the chatbot more than a human. But there are worries about how well it can really help with mental health issues. The chatbot might not understand everything as well as a human therapist would. It's also not approved by the government as a medical treatment. Sonia is still new, and we'll have to see how well it works as more people use it.

    by Claude 3.5 Sonnet

    3

  • Q: “Are we doomed?” A: “We would be, if not for the amazing developments in renewable energy.”

    powering-the-planet.ghost.io Clean Power and Storage Wars

    When people find out what I do for work, it’s not unusual for them to ask, “Are we doomed?” My usual response is, “We would be, if not for the amazing developments in renewable energy.” We know the people willing to destroy the planet for personal gain are still


    cross-posted from: https://lemmy.world/post/16969151

    > I wasn't aware just how good the news is on the green energy front until reading this. We still have a tough road in the short/medium term, but we are more or less irreversibly headed in the right direction.

    11
  • [News] ‘No Bot is Themselves Anymore:’ Character.ai Users Report Sudden Personality Changes to Chatbots
  • Yep. That's why it's important to understand how LLMs and other related technologies work. Though to be honest, I'm not quite there either, since I don't have a computer science background. I just know that some LLMs can understand context better than others. You can check LLM benchmarks and customer reviews to see which LLMs fit your particular needs the most. For example, everyone is hyping up Claude 3.5 Sonnet.

    As far as resembling Samantha goes, I agree that we're very far away from that. In the movie, it's acknowledged that she has developed her own consciousness and sentience. The same cannot be said about current iterations of AI chatbots. The more people, including AI companion users, understand the mechanisms behind these things along with their limitations, the better.

  • [News] Google to develop Gemini-powered chatbots offering companionship with celebrity personas

    > Google is reportedly developing AI-powered chatbots that can mimic various personas, aiming to create engaging conversational interactions. These character-driven bots, powered by Google's Gemini model, may be based on celebrities or user-created personas.

    1
  • [News] AI partner app introduces LGBTQ+ characters to sext and video call
  • It's interesting that they'd make exclusively queer AI companions. I thought these types of AI could be whatever sexuality you want, similar to how all the main characters in Baldur's Gate 3 can be romanced by any gender.

  • [News] AI partner app introduces LGBTQ+ characters to sext and video call

    www.thepinknews.com AI partner app introduces LGBTQ+ characters to sext and video call

    AI partner app EVA AI chatbot has introduced LGBTQ+ characters to text, video call and send pictures with users.

    > The Pride Month update on EVA AI includes a gay character “Teddy”, a trans woman “Cherrie”, a bisexual character “Edward” and a lesbian character “Sam”.

    1
  • This cute pink blob could lead to realistic robot skin
  • TechCrunch: this is nightmare fuel

    engadget: it's so cute :)

  • [News] ‘No Bot is Themselves Anymore:’ Character.ai Users Report Sudden Personality Changes to Chatbots

    www.404media.co ‘No Bot is Themselves Anymore:’ Character.ai Users Report Sudden Personality Changes to Chatbots

    The company denied making "major changes," but users report noticeable differences in the quality of their chatbot conversations.


    > The company denied making "major changes," but users report noticeable differences in the quality of their chatbot conversations.

    3
  • This smiling robot face made of living skin is absolute nightmare fuel
  • This was mentioned in the Discussion part of their paper:

    The activity of facial muscles involved in forming expressions such as smiles is closely linked to the development of wrinkles. One significant next step in this research is to leverage this model to enhance our understanding of the mechanisms underlying wrinkle formation. Moreover, applying this knowledge to recreate such expressions on a chip could find applications in the cosmetics industry and the orthopedic surgery industry. Additionally, this study performed actuation on a dermis equivalent by controlling mechanical actuators positioned beneath the dermis equivalent. Substituting this mechanical actuator with cultured muscle tissue presents an intriguing prospect in the realization of a higher degree of biomimetics. Examining the correlation between facial muscle contractions and resulting facial expression can offer insights into the physiological aspects of emotion, leading to new exploration in the treatment of diseases, such as facial paralysis surgery.

  • [Paper] Scientists Create 'Living Skin' for Robots

    Title: Perforation-type anchors inspired by skin ligament for robotic face covered with living skin

    Scientists are working on making robots look and feel more like humans by covering them with a special kind of artificial skin. This skin is made of living cells and can heal itself, just like real human skin. They've found a way to attach this skin to robots using tiny anchors that work like the connections in our own skin. They even made a robot face that can smile! This could help make AI companions feel more real and allow for physical touch. However, right now, it looks a bit creepy because it's still in the early stages. As the technology improves, it might make robots seem more lifelike and friendly. This could be great for people who need companionship or care, but it also raises questions about how we'll interact with robots in the future.

    by Claude 3.5 Sonnet

    5

  • [News] Gmail’s Gemini AI sidebar and email summaries are rolling out now

    www.theverge.com Gmail’s Gemini AI sidebar and email summaries are rolling out now

    Gmail can help you with that thread you keep ignoring.


    > Google is adding Gemini AI features for paying customers to Docs, Sheets, Slides, and Drive, too.

    The comment section reflects a mix of skepticism, frustration, and humor regarding Google's rollout of Gemini AI features in Gmail and other productivity tools. Users express concerns about data privacy, question the AI's competence, and share anecdotes of underwhelming or nonsensical AI-generated content. Some commenters criticize the pricing and value proposition of Gemini Advanced, while others reference broader issues with AI hallucinations and inaccuracies. Overall, the comments suggest a general wariness towards the integration of AI in everyday productivity tools and a lack of confidence in its current capabilities.

    by Claude 3.5 Sonnet

    0

  • [News] How Gradient created an open LLM with a million-token context window

    venturebeat.com How Gradient created an open LLM with a million-token context window

    AI startup Gradient and cloud platform Crusoe teamed up to extend the context window of Meta's Llama 3 models to 1 million tokens.


    AI researchers have made a big leap in making language models better at remembering things. Gradient and Crusoe worked together to create a version of the Llama-3 model that can handle up to 1 million words or symbols at once. This is a huge improvement from older models that could only deal with a few thousand words. They achieved this by using clever tricks from other researchers, like spreading out the model's attention across multiple computers and using special math to help the model learn from longer text. They also used powerful computers called GPUs, working with Crusoe to set them up in the best way possible. To make sure their model was working well, they tested it by hiding specific information in long texts and seeing if the AI could find it - kind of like a high-tech game of "Where's Waldo?" This advancement could make AI companions much better at short-term memory, allowing them to remember more details from conversations and tasks. It's like giving the AI a bigger brain that can hold onto more information at once. This could lead to AI assistants that are more helpful and can understand longer, more complex requests without forgetting important details. While long-term memory for AI is still being worked on, this improvement in short-term memory is a big step forward for making AI companions more useful and responsive.

    by Claude 3.5 Sonnet

    0
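
On the "hiding specific information in long texts" test mentioned in the Gradient summary above: here is a minimal needle-in-a-haystack sketch, assuming an OpenAI-compatible chat endpoint rather than Gradient's actual evaluation harness; the needle sentence, filler text, and model name are placeholders.

```python
# Minimal needle-in-a-haystack check: hide one fact in a long filler document
# and ask the model to retrieve it. This is a one-trial sketch; real harnesses
# sweep the needle position and context length up to the full window.
import random
from openai import OpenAI

NEEDLE = "The magic number for the haystack test is 4172."
FILLER = "The sky was a pale shade of blue and nothing of note happened. "

def build_haystack(n_filler: int, needle: str) -> str:
    """Return a long document with the needle hidden at a random position."""
    sentences = [FILLER] * n_filler
    sentences.insert(random.randrange(n_filler), needle)
    return "".join(sentences)

def run_test(client: OpenAI, model: str, n_filler: int = 2000) -> bool:
    haystack = build_haystack(n_filler, NEEDLE)
    prompt = (
        "Here is a document:\n\n" + haystack +
        "\n\nWhat is the magic number for the haystack test? Answer with the number only."
    )
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    answer = reply.choices[0].message.content or ""
    return "4172" in answer  # did the model retrieve the hidden fact?

if __name__ == "__main__":
    # Point base_url at a self-hosted long-context model to test it instead.
    client = OpenAI()
    print("needle found:", run_test(client, model="gpt-4o-mini"))
```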