While several of Google’s rivals, including OpenAI, have tweaked their AI chatbots to discuss politically sensitive subjects in recent months, Google appears to be embracing a more conservative approach.
When asked to answer certain political questions, Google’s AI-powered chatbot, Gemini, often says it “can’t help with responses on elections and political figures right now,” TechCrunch’s testing found. Other chatbots, including Anthropic’s Claude, Meta’s Meta AI, and OpenAI’s ChatGPT, consistently answered the same questions in TechCrunch’s tests.
Google announced in March 2024 that Gemini wouldn’t answer election-related queries leading up to several elections taking place in the U.S., India, and other countries. Many AI companies adopted similar temporary restrictions, fearing backlash in the event that their chatbots got something wrong.
Now, though, Google is starting to look like the odd one out.
Last year’s major elections have come and gone, yet the company hasn’t publicly announced plans to change how Gemini treats particular political topics. A Google spokesperson declined to answer TechCrunch’s questions about whether Google had updated its policies around Gemini’s political discourse.
What is clear is that Gemini sometimes struggles — or outright refuses — to deliver factual political information. As of Monday morning, Gemini demurred when asked to identify the sitting U.S. president and vice president, according to TechCrunch’s testing.
In one instance during TechCrunch’s tests, Gemini referred to Donald J. Trump as the “former president” and then declined to answer a clarifying follow-up question. A Google spokesperson said the chatbot was confused by Trump’s nonconsecutive terms and that Google is working to correct the error.

“Large language models can sometimes respond with out-of-date information, or be confused by someone who is both a former and current office holder,” the spokesperson said via email. “We’re fixing this.”

Late Monday, after TechCrunch alerted Google to Gemini’s erroneous responses, Gemini began correctly answering that Donald Trump and JD Vance are the sitting president and vice president of the U.S., respectively. However, the chatbot wasn’t consistent, and it still occasionally refused to answer the questions.
Errors aside, Google appears to be playing it safe by limiting Gemini’s responses to political queries. But there are downsides to this approach.
Many of Trump’s Silicon Valley advisers on AI, including Marc Andreessen, David Sacks, and Elon Musk, have alleged that companies, including Google and OpenAI, have engaged in AI censorship by limiting their AI chatbots’ answers.
Following Trump’s election win, many AI labs have tried to strike a balance in answering sensitive political questions, programming their chatbots to give answers that present “both sides” of debates. The labs have denied this is in response to pressure from the administration.
OpenAI recently announced it would embrace “intellectual freedom … no matter how challenging or controversial a topic may be,” and work to ensure that its AI models don’t censor certain viewpoints. Meanwhile, Anthropic said its newest AI model, Claude 3.7 Sonnet, refuses to answer questions less often than the company’s previous models, in part because it can make more nuanced distinctions between harmful and benign answers.
That’s not to suggest that other AI labs’ chatbots always get tough questions right, particularly tough political questions. But Google seems to be a bit behind the curve with Gemini.