As people come to rely on AI tools to answer questions, they will likely turn to those tools for political questions as well. The answers AI companies choose to provide, the author argues, may thus subtly but substantially influence public attitudes and, therefore, elections. That influence looms especially large to the extent Big Tech has been shifting from a “user sovereignty model,” in which tools (word processors, browsers, search engines) were meant to be faithful servants of the user, to a “public safety and social justice model,” in which tools (social media platforms, AI assistants) are designed in part to refuse to output answers their creators deem dangerous or immoral. What should we think about that?
