I was watching the RFK Jr. questioning today, and when Bernie was talking about healthcare and wages I felt he was the only one who gave a real damn. I also thought "Wow, he's kinda old," so I asked my phone how old he actually was. Gemini, however, wouldn't answer a simple, factual question about him. What the hell? (The answer is 83 years old, btw. Good luck, America.)
Dunno if your typo was intentional or not, but all I see in this thread is that somehow a typo is a way to bypass whatever block they have on discussion related to political figures. Which is bonkers. The great minds at the Goog somehow missed a pretty obvious workaround.
Google has implemented restrictions on Gemini's ability to provide responses on political figures and election-related topics. This is primarily due to the potential for AI to generate biased or inaccurate information, especially in the sensitive and rapidly evolving landscape of politics.
Here are some of the key reasons behind this decision:
- Mitigating misinformation: Political discourse is often rife with misinformation and disinformation. AI models, like Gemini, learn from vast amounts of data, which can include biased or inaccurate information. By limiting responses on political figures, Google aims to reduce the risk of Gemini inadvertently spreading misinformation.
- Avoiding bias: AI models can inherit biases present in their training data. This can lead to skewed or unfair representations of political figures and their viewpoints. Restricting responses in this area helps to minimize the potential for bias in Gemini's output.
- Preventing manipulation: AI-generated content can be used to manipulate public opinion or influence elections. By limiting Gemini's involvement in political discussions, Google hopes to reduce the potential for its technology to be misused in this way.
- Maintaining neutrality: Google aims to maintain a neutral stance on political matters. By restricting Gemini's responses on political figures, they can avoid the appearance of taking sides or endorsing particular candidates or ideologies.
While these restrictions may limit Gemini's ability to provide information on political figures, they are in place to ensure responsible use of AI and to protect the integrity of political discourse.
Local LLMs aren't perfect, but they're getting more usable. There are abliterated models and uncensored fine-tunes to choose from if you don't like your LLM rejecting your questions.
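If you want to try one, here's a minimal sketch using the llama-cpp-python bindings. The model path and filename are made up; point it at whatever GGUF file you actually downloaded (an abliterated or uncensored fine-tune works the same way as any other).

```python
# Minimal local inference sketch with llama-cpp-python.
# The model file below is hypothetical; substitute your own GGUF.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/my-uncensored-model.Q4_K_M.gguf",  # hypothetical path
    n_ctx=2048,      # context window size
    n_gpu_layers=0,  # 0 = CPU only; raise this if your GPU is supported
)

out = llm("How old is Bernie Sanders?", max_tokens=64)
print(out["choices"][0]["text"])
```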
I have dabbled with running LLMs locally. I'd absolutely love to, but for some reason AMD dropped support for my GPU in their ROCm drivers, which are needed for using my GPU for AI on Linux.
When I tried, it fell back to using my CPU, and I could only use small models because of the low VRAM of my RX 590 😔
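If anyone else is stuck diagnosing this, you can at least confirm what your PyTorch build actually sees before loading anything. On ROCm builds the GPU still shows up through the torch.cuda API, so a quick check looks like this (a sketch, assuming a PyTorch install with or without ROCm support):

```python
# Check whether this PyTorch build can see a usable GPU before
# loading a model; ROCm GPUs appear via the torch.cuda namespace.
import torch

if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
    print("HIP (ROCm) version:", torch.version.hip)  # None on CUDA builds
else:
    print("No supported GPU found; everything will run on the CPU")

device = "cuda" if torch.cuda.is_available() else "cpu"
```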
I mostly can't understand why people are so into "LLMs as a substitute for Web search", though there are a bunch of generative AI applications that I do think are great. I eventually realized that for people who want to use their cell phone via voice, LLM queries can be done without hands or eyes getting involved. Web searches cannot.
While I mostly agree, I have used LLMs to help me find some truly obscure stuff, or things where a normal web search would mean sifting through a lot of sources that are too generalized. An LLM can give me the exact thing from a more generic query, and then I can take that specific output and find the detailed source.
Would saying “Gemini, open the Wikipedia page for Bernie Sanders and read me the age it says he is”, for example, suffice as a voice input that both bypasses subject limitations and evades AI bullshitting?
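(For what it's worth, that command basically boils down to a lookup you could script yourself. Here's a toy Python sketch that pulls the Wikipedia summary and parses the birth year out of it; the regex is an assumption about how the page happens to be worded, so don't treat it as robust.)

```python
# Fetch Bernie Sanders' Wikipedia summary and derive his age from it,
# instead of trusting an LLM's memorized answer.
import re
from datetime import date

import requests

resp = requests.get(
    "https://en.wikipedia.org/api/rest_v1/page/summary/Bernie_Sanders",
    headers={"User-Agent": "age-check-demo/0.1 (example script)"},
    timeout=10,
)
extract = resp.json()["extract"]  # e.g. "...(born September 8, 1941) is an American politician..."
print(extract)

# Naive parse: assumes the summary mentions "born ... <year>".
m = re.search(r"born[^)]*?(\d{4})", extract)
if m:
    print("Roughly", date.today().year - int(m.group(1)), "years old")
```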
Especially for something so trivial. If you use it to learn about some larger conflict or something, fine (though don't expect accuracy). If you're using it for someone's age, which has been trivial to find with a quick search for at least a decade, something has gone wrong with you. It's the higher-effort option for a worse result.
To see if it can do it, and how accurate its general knowledge is compared to the real data. A locally hosted LLM doesn't leak private data to the internet.
Most webpages and Reddit posts in search results are themselves full of LLM-generated slop now. At this stage of the internet, if you're gonna consume slop one way or the other, it might as well be on your own terms by self-hosting an open-weights, open-license LLM that can directly retrieve information from fact databases like WolframAlpha, Wikipedia, the World Factbook, etc. through RAG. It's never going to be perfect, but it's getting better every year.
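A bare-bones sketch of that RAG idea, using Wikipedia's public summary endpoint as the fact source (the prompt template and topic are just placeholders; the assembled prompt would go to whatever self-hosted model you run):

```python
# Retrieval-augmented generation, minus everything fancy: fetch a
# factual snippet, then ground the model's answer in it instead of
# trusting whatever the weights half-remember.
import requests

def retrieve(topic: str) -> str:
    """Fetch a short factual extract for the topic from Wikipedia."""
    url = f"https://en.wikipedia.org/api/rest_v1/page/summary/{topic}"
    resp = requests.get(url, headers={"User-Agent": "rag-demo/0.1"}, timeout=10)
    return resp.json()["extract"]

question = "How old is Bernie Sanders?"
context = retrieve("Bernie_Sanders")

# Hand this prompt to your local model; the answer is only as good
# as the retrieved context, which is the whole point of RAG.
prompt = (
    "Answer using only the context below.\n\n"
    f"Context: {context}\n\n"
    f"Question: {question}\nAnswer:"
)
print(prompt)
```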
To be honest, that seems like it should be the one thing they are reliably good at. It requires just looking up info on their database, with no manipulation.
Obviously that's not the case, but that's just because currently LLMs are a grift to milk billions from corporations by using the buzzwords that corporate middle management relies on to make it seem like they're doing any work. It's relying on modern corporate FOMO to get them to buy a terrible product that they absolutely don't need, at exorbitant contract prices, just to say they're using the "latest and greatest" technology.
> To be honest, that seems like it should be the one thing they are reliably good at. It requires just looking up info on their database, with no manipulation.
That’s not how they are designed at all. LLMs are just text predictors. If the user inputs something like “A B C D E F” then the next most likely word would be “G”.
Companies like OpenAI will try to add context to make things seem smarter, like priming it with the current date so it won't just respond with some date it was trained on, or looking up info on specific people or whatnot, but at its core, they are just really big autofill text predictors.
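You can actually watch the "text predictor" part happen with a small open model. A rough demo using Hugging Face's transformers library and GPT-2 (ancient by current standards, but the mechanism is identical): feed it "A B C D E F" and ask for the single most likely next token.

```python
# Next-token prediction, laid bare: take the logits for the last
# position and pick the highest-probability token.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

inputs = tokenizer("A B C D E F", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, seq_len, vocab_size)

next_id = int(logits[0, -1].argmax())   # most likely next token id
print(repr(tokenizer.decode(next_id)))  # very likely ' G'
```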
Yeah, I still struggle to see the appeal of chatbot LLMs. So it's like a search engine, but you can't see its sources, and sometimes it 'hallucinates' and gives straight-up incorrect information. My favorite was a few months ago when I was searching Google for why my cat was chewing on plastic. Like halfway through the AI response at the top of the results, it started going on a tangent about how your cat may be bored and enjoys watching you shop, lol
So basically, it makes it easier to get a quick result if you're not able to quickly and correctly parse through Google results... But the answer you get may be anywhere from zero to a hundred percent correct, and you don't really get to double-check the sources without further questioning the chatbot. Oh, and LLMs have been shown to intentionally lie and mislead when confronted with inaccuracies they've given.
Not just Bernie; it doesn't reply to questions about Trump either. I guess they don't want the AI to reply to political questions with biased opinions from the internet, which is fair.
This may be an unpopular opinion (though I doubt it), but I think this is good. I don't want people forming opinions about politics based on AI output. In the best case, it isn't reliable. In the worst case, it'll make things up and lead people to false conclusions. If you want the information, it's out there. Don't rely on LLMs to give accurate information, especially on current events.
Kinda difficult to have an AI with those restrictions, because anything is politicized depending on your POV; just getting information and using science is considered politics in 2025. That said, AIs need to do a better job sourcing their info if people are using them to search, so it's at least traceable. Chances are most of this info is just from Wikipedia, but they should track it better.
This is absolutely a good thing. There's enough misinformation out there, we don't need people getting their news about politics from an algorithm that is used to generate text.