Written by Ido Kalev. This is a personal first-hand account from his operational experience, shared here exactly as originally written.
A First-Hand Lesson in Aviation Security, Human Judgment, and the Limits of Intelligence
In aviation security, the most dangerous assumption is not that an attack will happen, but that if nothing happened, the system must have worked. This assumption is comforting. It is also wrong.
In late 2002, after two weeks of operational deployment in Kenya, I boarded Arkia Flight 582 from Mombasa to Tel Aviv. The flight was full of Israeli civilians returning home: families, couples, tour groups. From the outside, it looked like an ordinary charter flight ending an ordinary vacation season. It was not.
The Context That Preceded the Attack
To understand what happened that morning, one must understand that it did not begin that day. Four years earlier, in 1998, Al-Qaeda carried out simultaneous truck bomb attacks against the U.S. embassies in Nairobi, Kenya, and Dar es Salaam, Tanzania. The attacks followed a classic Trojan Horse delivery pattern: a vehicle resembling a routine service truck, arriving at a familiar time, on a familiar route, but carrying explosives instead of supplies. The result was catastrophic: more than 220 killed, thousands injured, and a clear signal that East Africa had become an operational theater for transnational terrorism. Those attacks were not an anomaly. They were a rehearsal.
Security Reality on the Ground
By 2002, aviation security in the region faced a structural problem. The red zones were vast. The terrain was open. The local security forces were under-resourced, under-trained, and in some cases disengaged. During pre-mission surveys around Moi International Airport in Mombasa, we identified vulnerabilities that could not be “secured” in the traditional sense. One village adjacent to the airport perimeter raised immediate concern. When we assessed the local military unit responsible for that sector, we found soldiers asleep at their posts. There was no realistic way to flood the area with forces. There was no second runway. There was no margin for error. At that point, security was no longer about manpower. It was about judgment. We classified threats by operational focus zones (red, orange, yellow, green), not as abstract risk levels but as guidance for where human attention must concentrate. This was not theory. It was survival logic.
The Moment of the Attack
Shortly after takeoff, approximately ninety seconds into the climb, two shoulder-fired SA-7 ‘Strela’ missiles were launched at our aircraft. The first missed widely. The second passed closer, close enough to be unmistakable, but high enough to fail. Had the attackers waited seconds longer, allowing the missiles to lock onto the engines, the outcome would have been very different. They were not professional. We were lucky.
From the cockpit, the pilot calmly informed ground control that missiles had been fired. When asked if he was certain, his response was simple: “I know what missiles look like when they’re fired at you.” Minutes later, an Israeli Air Force F-16 joined our aircraft, close enough that passengers could see the pilot wave. Only then did the cabin fall silent. The flight continued to Israel.
What Followed on the Ground
Upon landing, the scale of the incident became visible. Fire crews lined the runway. Medical teams stood by. Emergency services were fully deployed. Every passenger was escorted off the aircraft and received immediate psychological support. I was taken directly to debriefing.
Meanwhile, on the ground in Mombasa, the same terrorist cell detonated a vehicle-borne explosive at the Paradise Hotel, killing three Israelis and thirteen Kenyan staff members. Many of the Israeli passengers from our flight had already checked in earlier and were not present in the lobby at the time of the blast. The hotel attack and the missile launch were part of the same operation.
The Decision That Never Appears in the Report
There is one detail rarely mentioned in post-incident analyses, because nothing dramatic resulted from it. Prior to takeoff, we deliberately requested a change in the departure direction. It was not based on specific intelligence. It was based on terrain assessment, threat geometry, and the understanding that predictability is itself a vulnerability. The pilot, a former air force officer and a trusted professional, complied without hesitation.
That small deviation mattered. Shoulder-fired missiles rely on geometry, timing, and expectation. A predictable takeoff path simplifies targeting. A changed trajectory forces recalculation, often beyond the attacker’s capability. What would have happened had we followed routine? No one can say with certainty. And that is precisely the point. Proactive security is rarely proven by what it stops, but by the uncertainty it injects into the adversary’s plan. Most of the time, the decision that saves lives is the one that never makes headlines.
Why Human Judgment Mattered More Than Systems
There was no single system that “stopped” this attack. Local security did not prevent it. Intelligence did not provide a tactical warning in real time. Technology did not intercept the missiles. What prevented catastrophe was a chain of human decisions made before, during, and after the event: decisions based on experience, pattern recognition, and the willingness to act under uncertainty. This incident reinforced a lesson that technology alone cannot replace. Security systems can reduce risk. They cannot eliminate it. Only people can.
Technology as a Response – Not a Replacement
In the years that followed, significant investment was made in counter-MANPADS technology. Israel developed and operationalized airborne missile defense systems such as C-MUSIC (and later J-MUSIC for larger aircraft), designed to counter shoulder-fired threats. These systems work. They save lives. But they were born out of failure, not foresight. Technology did not prevent the attack. It reduced future probability, after human judgment had already carried the cost.
By 2009, these systems entered operational service. By 2018, I found myself responsible for their operational deployment, closing a personal and professional circle that began on that flight in 2002.
The Deeper Lesson for Aviation Security
Aviation security does not fail when missiles are fired. It fails when systems assume that sensors replace judgment, that intelligence replaces decision-making, and that routine is neutral. Routine is not neutral. Routine is information. Modern adversaries do not overpower systems. They study them. They exploit predictability, timing, and complacency, long before an alarm sounds.
That day in Mombasa taught me something no briefing ever could. Survival is rarely about having the perfect system. It is almost always about knowing where systems end, and humans must decide. Intelligence may initiate an operation. Technology may reduce risk. But it is the human operator, trained, empowered, and willing to act without certainty, who ultimately determines the outcome. And sometimes, the greatest success is measured not by what happened, but by what didn’t.