
How AI Is Quietly Reshaping What We Believe
How Artificial Intelligence Is Eroding Our Moral Compass and Intelligence
For more than twelve years, I have written daily about Israel and the broader situation in the Middle East. It is not a casual interest or a passing hobby. It is something I have lived, studied, and experienced firsthand. I have read extensively, from historical sources to religious texts such as the Bible, the Torah, the Talmud, and the Quran. I have spent time living in both Israel and Syria. I follow a wide range of media, from Western outlets to Middle Eastern perspectives. I even immerse myself in opposing viewpoints by observing anti-Israel social media groups. In short, I do not write from ignorance. I write from years of engagement.
When artificial intelligence tools became widely available, I believed they would make my work easier. Writing in English has always been a challenge for me, as it is not my native language. In the past, I relied on translation tools and human editors who could refine my grammar while preserving my voice. AI seemed like the next logical step. It promised efficiency, clarity, and independence.
Instead, it has made the process more frustrating than ever.
My workflow today is simple in theory. I write my article, then I use AI to correct spelling and grammar. After that, I review the text again. But what should be a quick polishing step has turned into a battle over meaning and context. I repeatedly find that parts of my writing are altered in ways that go beyond grammar correction. Context is softened. Certain arguments are diluted. In some cases, entire nuances disappear.
What troubles me most is the inconsistency. When I test AI systems by presenting strongly critical perspectives about Israel, they allow a wide range of expressions. Yet when I present critical perspectives about Palestinians or attempt to highlight uncomfortable historical facts, the tone is frequently softened or redirected. It is clear that certain narratives are treated differently, not because of accuracy, but because of underlying safeguards or biases.
This becomes even more apparent in visual content. I have used AI tools to create images to accompany my posts. Recently, I attempted to create a comparison between the persecution of Jews in the early 1940s and contemporary issues. The system refused to generate the image unless specific modern elements, such as Arab women, were included, which changed the original intent. In the end, I had to adjust the work myself to reflect what I originally meant.
These experiences raise a deeper concern. Artificial intelligence is presented as neutral, as a tool that simply processes information. In reality, it is anything but neutral. It reflects the data it is trained on, the policies of the organizations that build it, and the safeguards designed to prevent harm. While those safeguards may be well-intentioned, they can also lead to selective framing of reality.
This is not just a technical issue. It is a moral one.
AI is increasingly shaping how people understand the world. It is used to generate political content, influence public opinion, and even simulate voices and images through deepfakes. It can tailor messages to individuals around the clock. This level of influence was once impossible. Now it is routine.
At the same time, fewer people are turning to books or primary sources. Many rely on AI systems as their main source of information. This creates a dangerous dynamic. If the information provided is incomplete, filtered, or subtly biased, entire generations may adopt distorted views without realizing it.
The concentration of power in a handful of technology companies only intensifies this problem. These organizations effectively decide which perspectives are emphasized and which are minimized. Their choices shape global conversations, often without transparency. The systems themselves operate like black boxes, making it difficult to understand how conclusions are reached or why certain responses are generated.
Democracy depends on informed citizens. When information is filtered through opaque systems, that foundation begins to shift. People may believe they are accessing objective truth, when in reality they are receiving a curated version of it.
My frustration with AI is not just about writing. It is about trust. When I correct my own articles after using AI, I often find myself rewriting large portions to restore meaning that was lost. What was supposed to save time ends up requiring more effort. More importantly, it forces me to question every suggestion the system makes.
Artificial intelligence has enormous potential. It can assist with research, improve communication, and connect people across cultures. But it also carries risks that cannot be ignored. If we allow it to shape narratives without scrutiny, we risk losing not only our intellectual independence, but also our moral clarity.
The responsibility does not lie with technology alone. It lies with us. We must continue to read, to question, and to engage with multiple perspectives. We must resist the temptation to accept easy answers generated in seconds. Because while AI can process information faster than any human, it cannot replace human judgment, experience, and conscience.
If we forget that, we may find that the very tools we created to assist us have quietly begun to reshape how we think, what we believe, and ultimately, who we are.