Did an AI mistake kill 160 girls in Iran? The dangerous truth of algorithms in war


What happened on 28 February 2026 in Minab, a city in southern Iran, shocked the whole world. America fired a missile at a girls’ primary school, and more than 160 girls were killed.

The attack came while classes were in session and hundreds of girl students aged 7 to 12 were present in the school. It is now being considered one of the deadliest incidents of the 2026 Iran war.

But the biggest question is: how did a school become a target? This raises three basic questions.

First question: Was it a deliberate attack?

Second question: Was it simply a mistake made in the chaos of war?

Third question: Did the artificial intelligence (AI) being used in the war make a serious mistake?

Whatever the answer, that is a matter for investigation; here we will try to understand whether AI can make this kind of mistake. For the first time in history, artificial intelligence has been used at this scale in a war. The US government has been signing agreements with companies like Anthropic and OpenAI so that AI can be used on a large scale in warfare.

How the school in Minab was attacked

According to reports, the school that was targeted was the Shajareh Tayyabeh Girls School. The attack occurred on the day America and Israel launched massive air strikes on Iran. Iran claims that 168 to 180 people died in the attack, most of them schoolgirls.

US officials said they did not intentionally target any school and that the matter is being investigated. Later, some US officials acknowledged that the strike may have been part of US military action, although the investigation has not yet been completed.

Investigations also revealed that the site where the school stood was once a base of Iran’s Islamic Revolutionary Guard Corps (IRGC). Several reports said the attack was probably aimed at that military complex, which was very close to the school.

This is where the question arises: if the target was a military base, how did the missile hit the school?

Modern war and the new role of AI

In today’s wars, selecting targets is no longer the work of humans alone. Militaries such as those of the US and Israel use AI-based targeting systems extensively. These systems decide where to strike based on satellite images, drone feeds, phone locations and data analysis.

AI can analyze thousands of potential targets and decide in seconds which building or location might be a military base. This is why many experts have begun calling modern war ‘algorithmic warfare’.

But the problem is that AI, for all its speed, can also be risky.
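To make the idea concrete, here is a toy sketch of how an automated target-scoring pipeline might rank candidate locations by combining data signals. This is not any real military system: every name, field and weight below is hypothetical, chosen only to illustrate how such a ranking could work and how it could go wrong.

```python
# Hypothetical illustration only; not any real targeting system.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    satellite_match: float   # 0..1, similarity to known military structures
    signal_activity: float   # 0..1, radio/phone activity in the area
    last_verified_days: int  # days since the intel on this site was confirmed

def score(c: Candidate) -> float:
    """Weighted combination of signals. Note that stale intel is not
    penalized at all -- exactly the kind of flaw discussed below."""
    return 0.6 * c.satellite_match + 0.4 * c.signal_activity

candidates = [
    Candidate("warehouse_A", 0.55, 0.30, last_verified_days=10),
    # A site flagged years ago as a base and never re-verified:
    Candidate("old_IRGC_site", 0.80, 0.20, last_verified_days=2900),
]

ranked = sorted(candidates, key=score, reverse=True)
print(ranked[0].name)  # -> old_IRGC_site: the stale entry ranks highest
```

The point of the sketch is that a purely numerical ranking inherits every flaw of its inputs; if the underlying database is years out of date, the "best" target can be a building that no longer serves any military purpose.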

How can AI make fatal mistakes?

If we look at the Minab attack from a technical perspective, many possibilities emerge.

The first possibility is incorrect data or outdated information. If the AI system had information that a military base was still present in the area, it could select targets on that basis, even though in reality a school had been built there years earlier.

The second possibility is incorrect analysis of the buildings around the target. Military bases and civilian buildings often sit in the same area. If the AI algorithm treated the entire complex as a military zone, the missile could land directly on the school.

The third possibility is over-reliance on automated targeting. In many modern weapons, once a target is selected, the final decision is made by algorithms and automated systems rather than by humans. If human checking is reduced in that process, the chance of error increases.
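The third failure mode above can be sketched as a simple decision gate. Again, everything here is hypothetical (real systems are classified and far more complex); the sketch only shows how skipping human review above a confidence threshold lets errors in the confidence estimate flow straight through to a strike.

```python
# Hypothetical "human in the loop" gate; illustration only.
from typing import Optional

def release_decision(confidence: float, civilian_risk: float,
                     human_approved: Optional[bool]) -> str:
    """Return what the system does with a proposed strike.

    If human review is skipped (human_approved is None) whenever the
    model's confidence is high enough, a wrongly high confidence plus
    an underestimated civilian risk is enough to trigger a strike.
    """
    AUTO_THRESHOLD = 0.90  # hypothetical cutoff for skipping review

    if civilian_risk > 0.2:
        return "abort"                      # hard safety rule
    if confidence >= AUTO_THRESHOLD and human_approved is None:
        return "strike (no human review)"   # the risky automated path
    if human_approved:
        return "strike (human approved)"
    return "hold for review"

# A mislabeled civilian site with wrongly high confidence and an
# underestimated civilian risk sails through the automated path:
print(release_decision(confidence=0.95, civilian_risk=0.05,
                       human_approved=None))
# -> strike (no human review)
```

The design point is that the threshold check trusts the model's own self-assessment; when that self-assessment is wrong, there is no independent check left to catch it.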

Such questions have been raised before

During the Gaza war too, Israel was accused of preparing a list of thousands of targets using the AI-based system Lavender. According to reports, questions were raised about the accuracy of that system, and wrong targets were reportedly selected many times.

That is why experts have long warned that if AI is given complete freedom in war, human control may weaken.

Is the Minab tragedy a warning about AI?

The truth of what happened in Minab will emerge only after the investigation. But it is clear that modern war is no longer only about soldiers and weapons; algorithms, data and machine learning are now part of it too. Machines inevitably make mistakes, and a single mistake can be devastating.

If there is even a slight error in the targeting system, or wrong data is fed into it, the result can be exactly what happened in Minab, where a missile fell on girls sitting in class.

The use of AI in war is increasing rapidly, but incidents like Minab raise the question of whether it is safe to let machines decide matters of life and death. The truth is that, at present, no one knows whether this attack on the school was intentional or the result of a machine.
