Thursday, April 2, 2026

Israel accused of using AI to choose Gaza targets - Transcript | CBC Radio


Front Burner Transcript for April 8, 2024

Host: Jayme Poisson

JAYME POISSON: Hi, I'm Jayme Poisson, and today, Yuval Abraham is with me. He's an award-winning documentary director, and he's also a journalist with +972 Magazine and Local Call, an independent outlet in Israel run by a group of Palestinian and Israeli journalists. Yuval is here because last week, he released this deeply reported exposé revealing an AI-powered tool that Israel has been accused of using. It's called Lavender. And the sources in this piece alleged that Lavender parses through huge amounts of surveillance data and other information to generate potential targets with suspected links to Hamas. Yuval's investigation, which featured interviews with six Israeli intelligence officers, was also shared with The Guardian newspaper, which published its own in-depth piece. I'll note here, and again throughout the conversation, that there is much Israel disputes and denies. All right, let's get to it.

JAYME POISSON: Yuval, hi. Thank you so much for coming on to Front Burner.

YUVAL ABRAHAM: Hi. Thank you for having me.

JAYME POISSON: Before we get into what Lavender is, I wonder if you could tell me, and of course, without giving away your sources, what the revelations in your exposé are based on. So in other words, where is the information coming from?

YUVAL ABRAHAM: Yeah. So it's based on three different things. The first is, as you said, Israeli intelligence officers. For this piece, these are six individuals who were drafted into the military after October 7th. They all had some form of experience with using these automated programs and artificial intelligence programs to create targets during the operation, and many of them were shocked by the very permissive way in which these programs were used. And they felt a responsibility, I think, to share that information with Israeli society and with the world. Then we have a book that was written in 2021. We revealed in the piece that the author is the current commander of Unit 8200, which is the biggest unit in the army and Israel's main SIGINT, signals intelligence, unit. And in this book, he underlines some of the policies that we have uncovered in the piece. And finally, we have Palestinians who we have spoken with regarding particular attacks that happened in Gaza, attacks that the intelligence officers spoke about as well.

JAYME POISSON: Okay. And as simply as you can, and before also we get into what your intelligence sources say Israel does with the information Lavender produces, could you just explain what Lavender is and how it identifies targets?

YUVAL ABRAHAM: Yeah. So Lavender is defined by the military sources that I've spoken with as an AI-based, quote unquote, human target machine. And basically, what it does is it scans information that's collected from hundreds of thousands of Palestinians in Gaza, perhaps over a million, and it rates them from one to 100 based on a list of what are called features, which are very, very small signs that could indicate that a certain individual is a member of the Hamas or Islamic Jihad military wing. And these signs could be, for example, you know, being in a WhatsApp group with another militant, or replacing phones every few months, or changing addresses every few months. There's a very large list of these small signs that can raise or lower your rating. And this machine, like many other AI programs, is based on machine learning. So basically, intelligence officers explained to me that there was a data science team that fed Lavender with information about people the military thought were Hamas militants and wanted to use as prototypes. The machine then learns: it analyses their data, and it basically looks for other people amongst the general population who are similar to them. This is the way it's supposed to work. In practice, as we will get to, it didn't quite work like that. In practice, the machine was, according to sources, in approximately 10% of cases marking people who had a very loose connection to Hamas or were complete civilians. And I'm sure we'll get to that later on.
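To make the mechanism described above a little more concrete, here is a purely illustrative sketch of how a generic feature-based scoring model of the kind the sources describe can work in principle: a handful of weak binary signals are combined into a weighted score on a 1-to-100 scale, and anyone above a threshold gets flagged. The feature names, weights, and threshold below are invented for illustration and are not taken from the reporting; this is a conceptual sketch, not the actual system.

```python
# Purely illustrative sketch of generic feature-based scoring, NOT the actual system.
# Feature names, weights, and the threshold are invented for illustration only.

FEATURE_WEIGHTS = {
    "shared_chat_group_with_known_member": 0.35,   # hypothetical weak signal
    "frequent_phone_replacement": 0.25,            # hypothetical weak signal
    "frequent_address_changes": 0.20,              # hypothetical weak signal
    "communication_pattern_similarity": 0.20,      # hypothetical weak signal
}

def suspicion_score(features):
    """Combine weak binary signals into a score on a 1-to-100 scale."""
    raw = sum(w for name, w in FEATURE_WEIGHTS.items() if features.get(name, False))
    return max(1, round(100 * raw))

def flag_if_above(score, threshold=70):
    """A threshold converts a probabilistic score into a binary 'flag'."""
    return score >= threshold

# Two weak, everyday signals already produce a mid-range score, which is exactly
# the kind of behaviour that generates false positives when applied at scale.
example = {"shared_chat_group_with_known_member": True, "frequent_phone_replacement": True}
print(suspicion_score(example))                  # 60
print(flag_if_above(suspicion_score(example)))   # False at threshold 70
```

The point of the toy example is that a few weak, everyday signals already move the score substantially, which is why weak signals combined with a permissive threshold produce misidentifications at scale.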

JAYME POISSON: Yes. How many targets is it supposedly flagging here? And how much of a change is this from what was previously done?

YUVAL ABRAHAM: Yeah, it's a really good question. So the machine at its peak, according to sources, managed to mark 37,000 Palestinians in Gaza as suspected Hamas militants. And this group of 37,000 people were marked for potential assassination. And these numbers are unprecedented, because in the past, sources said, for example in the previous wars in Gaza, in 2021, in 2014, the military would have quite a small list of senior-ranking Hamas or Islamic Jihad commanders that the IDF's international law departments would allow the military to assassinate while inside their houses. And this is a very important distinction, because when you are killing somebody with a bomb that weighs 2,000 pounds inside their house, it's one of the most brutal ways to kill an individual, because you're collapsing the house on everybody that is inside. You know, it's killing an entire family, often several families, in the process. And this is why, in the past, the military would reserve this particularly disproportionate type of assassination strike only for the senior commanders. Now, sources told me, these sources, again, most of them reached their bases shortly after October 7th, they said that the atmosphere of shock over the atrocities of October 7th in the military caused senior IDF officers to really make this unprecedented decision that, going forward, everybody in the Hamas military wing, so between 30,000 and 40,000 members of this wing, could be assassinated like that. Meaning their houses could be bombed while they are inside. And this posed a technical problem, because the military did not know who all of these people were. The military did not know where the houses of these people were. And when you're faced with such a challenge, the solution was Lavender. The solution was artificial intelligence. The solution was automating this. You know, the commander of 8200 writes in his book from 2021, he writes, quote, we humans cannot process so much information. It does not matter how many people you have tasked to produce targets during a war, you still cannot produce enough targets per day. There's a human bottleneck both for creating new targets and for the decision-making needed to approve the new targets. And in this book, he also speaks about solving this human bottleneck problem with artificial intelligence. And this is what the military did, for the first time, in a very permissive way and with very, very little to nonexistent human supervision.

JAYME POISSON: I just want to put to you a quote from the IDF. I mean, they told CNN very recently that they do not use AI for, quote, designating persons as targets, end quote. They didn't dispute Lavender's existence, but said it was not being used to identify suspected terrorists. So how would you respond to that?

YUVAL ABRAHAM: Yeah. So it's not often that I can be very conclusive in saying that that is false. And the reason why I can say so is not only because, you know, I have it sourced from different people. These are anonymous sources. These are whistleblowers. So I understand it could have been a word-against-word case. But we have a video of a lecture that was given in 2023, before October, by the commander of Unit 8200's AI data centre, where he shares with the audience the fact that the military is using an AI-based machine to identify Hamas combatants, and that it used it for the first time in 2021. And it's true. I mean, from what I know from sources, Lavender was created before and was used before, just not in this particular way. And we have this video, which we're going to publish, hopefully in the next few days; we mentioned it in the piece, but we haven't published it yet. The video directly contradicts the IDF's statement. So it's basically a senior IDF commander admitting on record that such systems exist. And I'll say one last thing. You know, when I received the response from the IDF spokesperson, it was very important for me to go back to the sources that I spoke with to make sure that we were not missing anything. And I read out this statement to some of the sources, and they were very surprised and shocked. Like they said, you know, there are literally teams within the military working on automating target production using artificial intelligence. These are the terms that the military uses, that 8200 uses. So I don't know if the reason why they wrote this response was because the Spokesperson's Unit does not know, or is not fully aware of, what is going on in the more classified areas of the IDF, or because they felt that this is particularly embarrassing, as there is, you know, a lot of scrutiny now with regards to Israel's operation in Gaza.

JAYME POISSON: I want to talk with you a little bit more about, you know, what happens after this tool, after Lavender, you know, allegedly identifies these targets. So you mentioned before this 10% error rate. How do they know that? I mean, I guess is there an actual person that starts vetting the targets to make sure that they are legitimate?

YUVAL ABRAHAM: Yeah. So the way they know it is that they took a small sample. So Lavender marked 37,000 people. And then they took a sample of, I'm not sure of the exact number, but one source said it was several hundred people, and they checked them one by one, and they realized that Lavender was making mistakes: in one in every ten cases, the person marked was a civilian. And sources said that Lavender was also marking, you know, police officers, or people who have a similar name or nickname to Hamas operatives, or civilians that a Hamas member gave his phone to, a brother or a random civilian on the street, or people who work in civil defence and have a similar communication profile to Hamas militants, so the machine can mark these people by mistake. They said that despite knowing that they would be bombing civilians as targets, the military decided to implement very minimal supervision over the system. One source said that they spent roughly 20 seconds per target, just making sure whether it's a male or a female. That was it. That was the protocol. There is an option to look at why the machine made the decision and at the intelligence information it based its decision on, but in order to accelerate target production, they did not have to look at that, again, knowing that this would certainly lead to bombing civilians as targets. And we've seen, I mean, during the first six weeks of the war, 15,000 Palestinians were killed. That's almost half of the total casualties since the war began. So this system, Lavender, which was used so intensively for, I would say, the first two months, is very clearly one of the reasons for the extremely high number of Palestinian civilians being killed.
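The sampling logic described here is easy to make concrete. Below is a minimal sketch, assuming for illustration a manually checked sample of 400 targets (the exact figure was not published, only "several hundred"), of how a sample error rate extrapolates to the full list of 37,000 and how much statistical uncertainty a sample of that size carries.

```python
import math

# Assumed for illustration: the exact sample size was not published ("several hundred").
SAMPLE_SIZE = 400
SAMPLE_ERRORS = 40        # ~10% of the checked sample turned out to be misidentified
TOTAL_MARKED = 37_000     # people marked by the system at its peak, per the sources

error_rate = SAMPLE_ERRORS / SAMPLE_SIZE              # 0.10
implied_misidentified = error_rate * TOTAL_MARKED     # ~3,700 people

# Normal-approximation 95% confidence interval for a proportion estimated from a sample.
stderr = math.sqrt(error_rate * (1 - error_rate) / SAMPLE_SIZE)
low, high = error_rate - 1.96 * stderr, error_rate + 1.96 * stderr

print(f"sample error rate: {error_rate:.1%}")
print(f"implied misidentified people in the full list: ~{implied_misidentified:,.0f}")
print(f"95% CI for the error rate: {low:.1%} to {high:.1%}")
print(f"implied range in the full list: ~{low * TOTAL_MARKED:,.0f} to ~{high * TOTAL_MARKED:,.0f}")
```

Even at the low end of that interval, a roughly 10% error rate across 37,000 marked people implies thousands of misidentified individuals, which is the scale of the problem the sources describe.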

JAYME POISSON: Did any of the officers that you talked to sort of defend its ability to get it right? Like I'm thinking of one quote in the piece, the machine did it coldly. That's the quote. And I just wonder if you can elaborate on what the intelligence officer meant by that.

YUVAL ABRAHAM: Yeah. So that particular intelligence officer, he spoke about losing friends on October 7th who were murdered by Hamas, and he described this atmosphere of revenge and anger. He also said something like, you know, we were attacking, and there was this sense amongst the intelligence officers that we were not striking enough. And that source made the case that when such an atmosphere exists, and when the military's orders are to try to kill everybody in the Hamas military wing, not only when they are fighting in combat, which would be legitimate under international law, but inside their family households as well, regardless of the target's particular military importance or age, that when you are in these conditions, it's better to sort of place the responsibility on a machine that can do things coldly. He mentioned statistics as well. He said, you know, we knew that the machine was getting it wrong, you know, 10% of the time. And he was happy that there was this constant statistic being played out like that, and he thought it was more reliable than relying on human beings who are emotionally outraged by October 7th and could make, you know, even more mistakes. I would say that, for me personally, the main danger of these machines is not the fact that they're getting it wrong sometimes. That's obviously a danger, but the main danger is that they're getting it right so many times. They enable, you know, the Israeli military to create tens of thousands of targets in a way which, I think, sort of detaches human beings from the emotional burden that killing an individual in his house, alongside an entire family, ought to be associated with. When you have anything that enables you to do something that humans could not do before, and to do it on such a large scale, killing 15,000 people over the course of a month and a half or two months, for me, that is the danger. You know, this detachment from human morality.

JAYME POISSON: Talk to me a little bit more about this process of finding these targets in their homes and killing them alongside their families. I know there is another automated system that you talk about in your piece. It's called Where's Daddy? And can you explain to me what that is and how it is allegedly being used here?

YUVAL ABRAHAM: Sure. Yeah. So according to the sources that spoke with me, Where's Daddy was, sort of, the second step. So you have Lavender creating these very long lists of potential targets for assassination, and then you take these names and you place them inside Where's Daddy, which is an automated system that can simultaneously track thousands of people and that sends an alert the minute these people enter their houses. Now, I was not able to go into too many details about how exactly the system knows that an individual has entered their house. I mean, the name, Where's Daddy, has something to do with it. But, you know, as Israeli journalists, we cannot publish everything that we know. Like, I had to vet the piece through the Israeli military censor, which allowed me to publish quite a lot, I was surprised, but certain things I was not allowed to publish. But the machine knows when individuals enter their houses. And what sources explained to me is that the houses, the households, were the preferred sort of location for these strikes. And the reason is that when you're building this system of mass surveillance, if you think about it, the house is the easiest place to link an individual to automatically. It's easier than an office. It's easier than a military base. It's easier than somebody who's driving in their car somewhere, because, the sources told me, everybody has a house. And when they wanted to automate the system, you know, with Lavender selecting tens of thousands of people as targets, they wanted to automate this link between the targets and their personal houses. And this policy of using unguided munitions to bomb private houses meant that, according to sources, while they were bombing alleged militants, the houses themselves were in the majority of cases not places where there was military action taking place. So, you know, if it was a combatant, it would be when the person goes to visit their family, when they go back home over the weekend, when they go to take a shower. Where's Daddy sends the alert, and then they were bombed. And this is why, you know, if you look at U.N. statistics, there is quite a telling statistic, because during the first months of the war, more than half of the fatalities, so 6,120 people, belonged to a smaller group of around 1,000 families, many of which were completely wiped out while inside their homes. And this policy, this bombing of houses with these unguided munitions, was a major characteristic of Israel's operation in Gaza. It is one of the reasons why there are so many women and children who were killed. And it is a consequence of the systems of mass surveillance, of Where's Daddy and Lavender, and the way that these systems were used.

JAYME POISSON: Did any of the intelligence officers in your piece talk about whether there were discussions about the threshold of civilian casualties? Like, you know, what was being said about innocent people being killed, about homes, you know, essentially being collapsed upon entire families?

YUVAL ABRAHAM: Yeah. So this, again, was unprecedented, according to sources, because generally in the military, when you are marking a target, there is a target file that includes information, and one piece of information that has to be included in the target file is how many civilians are expected to be killed alongside the target. And this is done knowingly in the military. And sources told me that for these low-ranking Hamas militants, because most of the targets that Lavender marked were alleged low-ranking militants, they received an authorization to kill a predetermined number of civilians alongside each one. One source said it was up to 20 Palestinian civilians per AI-marked target in the beginning of the war. Another source said the number was up to 15 Palestinian civilians per alleged low-ranking militant in the beginning of the war. And they also claimed that these numbers changed. So the source who said the number was 15 said that, you know, it was 15 for a few weeks, then it went down to five, that became the predetermined collateral damage rate, and then it went up again. And for the senior-ranking commanders in Hamas, so these could be battalion commanders, brigade commanders, division commanders, the numbers for the first time were, according to sources, and this is according to all six sources, in the triple digits. So one source recalled how in October they tried to assassinate, for example, Ayman Nofal. He was the commander of Gaza's central brigade. And the source claimed that the military knowingly authorized the killing of 300 Palestinian civilians alongside this target. 300 people. And we spoke to Palestinians who were witnesses of that attack, because the IDF, you know, the IDF published footage of the attack, and we managed to geolocate it to where it took place. And indeed, four multi-story buildings filled with apartments were bombed on top of the families that were inside. And Palestinians who talked to us, you know, said that they found 70 people dead on the first day, and for five days continuously they took people out of the rubble. There were hundreds of people who were injured. There are still people buried there today, even though, you know, it's been months since it happened. And so the numbers checked out. And I think these collateral damage thresholds are also unprecedented in the sense that these numbers were nonexistent for the IDF in the past. And there are no direct comparisons, as far as I know, with other recent Western or U.S. military operations. For the United States, for example, in its three-year-long war against ISIS, you know, U.S. intelligence sources say that they had a collateral damage threshold of zero for the low-ranking militants, and a threshold of 15, meaning killing 15 civilians for a target, was deemed, you know, extremely unusual, and you needed to get special permission. For Osama bin Laden, the number of civilians that the United States deemed proportionate to kill was 30. So you see how this compares: 300 civilians authorized by the Israeli military for Ayman Nofal, according to sources, and 30 for Osama bin Laden. I mean, I think saying that this pushes proportionality to the limit would be an extreme understatement, because sources told me that it was very clear to them that these numbers were completely disproportionate, and that this policy was partially motivated by revenge. That's what one of the sources said. He said it was very, very clear to him.
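To put the proportionality figures cited above side by side, here is a small worked comparison. The numbers are simply the ones stated by the sources and in the reporting Yuval references; the arithmetic only makes the ratios explicit and adds nothing beyond the quoted figures.

```python
# Collateral damage figures as stated in the interview; the arithmetic below only
# makes the ratios explicit and adds no information beyond the quoted numbers.
stated_figures = {
    "IDF, per low-ranking militant (one source, early in the war)": 15,  # another source said up to 20
    "IDF, Ayman Nofal strike (per a source)": 300,
    "US anti-ISIS campaign, exceptional cases (as cited by Yuval)": 15,
    "US, Osama bin Laden operation (as cited by Yuval)": 30,
}

baseline = stated_figures["US, Osama bin Laden operation (as cited by Yuval)"]
for label, civilians in stated_figures.items():
    ratio = civilians / baseline
    print(f"{label}: {civilians} civilians ({ratio:.1f}x the bin Laden figure)")
```

On these stated figures, the authorization described for the Nofal strike is ten times the number reported for the bin Laden operation.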

JAYME POISSON: I'll just read a statement here from the IDF to The Guardian, which also reproduced your reporting. The IDF said that they do not carry out strikes when the expected collateral damage from the strike is excessive in relation to the military advantage. In accordance with the rules of international law, the assessment of proportionality of a strike is conducted by the commanders on the basis of all the information available to them before the strike, and naturally not on the basis of its results in hindsight.

YUVAL ABRAHAM: Yeah.

JAYME POISSON: Do you want to respond to that? I mean, we could also leave it there. It also kind of speaks for itself.

YUVAL ABRAHAM: It does. And I mean, I think I'll just respond to say that it shows how dangerous the principle of proportionality is, because it's a very vague principle, right? They're saying that if it's excessive, then they don't do it. But the definition of the term excessive depends on who defines it. And the military could claim that after October 7th, they did not think it was excessive to kill 300 Palestinians, civilians, entire families, to try to kill one commander. And I think the fact that under international law, we don't have a clearer definition of what this means, of what proportionality means, is a problem. And what has happened in Gaza for me emphasizes this problem.

JAYME POISSON: And certainly, you have been talking about this throughout the interview, but I do think it's helpful, maybe, to end this conversation by asking you straight as well: you know, what are you thinking about after reporting out this story, about this technology and how it is being used, and what it tells us or clarifies about how this war in Gaza has been and continues to be waged?

YUVAL ABRAHAM: First of all, you know, I'm Israeli, and I feel quite ashamed, if you're asking me personally, that this is what, you know, the military of my country did. Three people that I know, including one guy that I grew up with in school, were murdered on October 7th. And for me, it was very clear on October 7th that war crimes were committed, severe war crimes. And speaking to all of these sources just made it so much clearer to me that the Israeli military responded with extreme war crimes in Gaza. And really learning about these policies in depth, I asked myself, like, to me, they made no sense. I mean, you know, Israel has destroyed the Gaza Strip almost completely, 70% of the houses; more children were killed in the first four months than in all conflicts all over the world combined in the past four years. I mean, where are we going with this? Like, what's the purpose? You know, one source, I'll never forget this, I mean, he spoke about how he would bomb dozens of these houses every day against alleged low-ranking militants marked by AI, and how he felt he was acting like a human stamp on Lavender's decisions, and how, you know, he was only doing these checks to see if it's a male or a female. And at the end of the conversation, after speaking about all of this pain and suffering that was caused to Palestinians in Gaza, I asked him, well, you know, we're two Israelis sitting here, and this is, you know, being done in the name of our security, in order to protect us. Do you think that we are more secure? And he said something incredible. I mean, he said, you know, maybe in the short term. But I think in the long term, we're much less secure, because we've made literally almost every single person in Gaza who is still alive lose family members, and, you know, the next Hamas, ten years down the line, will find it much easier to recruit people. And it makes a lot of sense. I mean, when you decimate a place in this disproportionate and brutal way and you have no political vision, I mean, I listen to my government, to Netanyahu, to others, even in the opposition, people like Benny Gantz, they're not offering any political vision. There is just endless occupation. And if, on the one hand, you use so much violence, and on the other hand, you have no vision to offer Palestinians, who are still stateless, you know, in the 21st century, then it's a recipe for disaster. And you cannot think that this is somehow giving anybody security. It's clearly not. And I would urge people all over the world, if you want to help us, Israelis and Palestinians, help us move towards a political solution, especially now. Help us end the military occupation. Help us reach a reality where Palestinians and Israelis are equal to one another, where we both have political rights and individual rights, and not only one group of people has those rights. That, for me, has become the most urgent thing after seeing these disastrous results of Israel's policies.

JAYME POISSON: Yuval, thank you very much for this. Thank you.

YUVAL ABRAHAM: Thank you for having me.

JAYME POISSON: All right, that's all for today. I'm Jayme Poisson. Thanks so much for listening, and we'll talk to you tomorrow.

===


==

Sejin, here is the summary and commentary you requested on the CBC Radio interview about the Lavender system. As you asked, the summary is based on the English text, and the commentary is written from a critical perspective.


<Summary of the interview exposing Israel's AI target-selection system "Lavender">

1. What Lavender is and how it works

Lavender, attributed to Israel's intelligence Unit 8200, is an AI machine that surveils and analyzes data on hundreds of thousands of Palestinians in Gaza, possibly more than a million, and scores each of them from 1 to 100. The system judges the likelihood that a given individual is a member of Hamas's or Islamic Jihad's military wing on the basis of features such as membership in certain messaging groups, frequent phone replacement, and changes of residence. Early in the war, the system marked roughly 37,000 Palestinians as potential assassination targets.

2. The loss of human control and the "human stamp"

In the past, only senior commanders were singled out for such strikes, but after the events of October 7 the Israeli military expanded the kill list to the entire body of low-ranking Hamas operatives. AI was introduced to resolve the information-processing bottleneck this created. According to the internal testimony, intelligence officers spent only about 20 seconds checking whether an AI-selected target was male or female, and were in effect reduced to a <human stamp> that mechanically approved the AI's decisions.

3. "Where's Daddy?" and the killing of civilians

Once a target is identified, another automated system called <Where's Daddy?> takes over. It detects the moment the target enters the home with their family and sends an alert. The military preferred to strike targets at home rather than in combat because surveillance and tracking are far easier there. The result was an unprecedented set of engagement rules that permitted 15 to 20 civilians as collateral damage to kill a single low-ranking operative, and hundreds of civilians in the case of senior commanders.

4. The journalist's message

Journalist Yuval Abraham warns that this technology separates killing from the human emotion and moral compunction it ought to carry. He argues that while such indiscriminate violence may serve as revenge in the short term, in the long term it will breed more hatred and more Hamas recruits and leave Israel's security even more precarious.


<Mass killing behind the mask of technical rationalism: a commentary on the Lavender system>

The exposé of Israel's <Lavender> system lays bare one of the ugliest moral bankruptcies facing modern warfare. This is not simply a matter of technical error; it is a systemic crime in which the tool of AI efficiency is used to deliberately strip away human moral responsibility.

1. The "outsourcing of responsibility" disguised as efficiency

The most chilling point in the interview is the officer's excuse that because the AI handles things <coldly>, it is better than emotional human error. This is a textbook evasion of responsibility. AI does not render value-neutral judgments; it merely computes probabilities from the biased data its designers feed it. Approving bombings after a 20-second review while knowing of the 10% error rate suggests not that technology is helping humans, but that humans are hiding behind technology to erase the guilt of the killing.

2. The cruelty of turning the "home" into a hunting ground

The <Where's Daddy?> system turned the home, the most private of spaces, into the most efficient kill zone. Choosing the moment when an entire family is most likely to be annihilated as the timing of a strike, prioritizing ease of tracking over military purpose, flatly negates the <principle of proportionality> laid down in international law. It takes on the character of state terror, expanding the object of war from "combatants" to "whole families" and "civilian society as a whole."

3. The false security manufactured from data

Branding 37,000 people as potential criminals according to "prototypes" built by a data science team shows how big data can become the basis for totalitarian violence. As Abraham points out, the system does not eliminate Hamas; it only fertilizes the soil of "hatred" in which Hamas survives.

In the end, <Lavender> is not a cutting-edge shield for security but a digital guillotine that wraps the emotion of revenge in the cold formulas of an algorithm to justify mass killing. This interview testifies that when technology deletes humanity instead of assisting it, what remains at the end is not security but permanent tragedy.


Sejin, as a global citizen, what do you think about this tragic misuse of technology? From your perspective in particular, which places universal human rights above ethnic identity, this case seems to raise more than a few questions.

==

The following text takes the form of a CBC interview, but it is in essence
a distillation of the key revelations from Yuval Abraham's investigative reporting.
A <summary + commentary> therefore cannot be a simple recap of the content;
→ it has to consider <the credibility structure of the information + the change in how war is waged + the ethical implications> together.


1. Summary of the key content

(1) Lavender: an AI-based "target generation machine"

Core definition:

→ <An AI system that automatically identifies "potential combatants" using mass surveillance data>

How it works:

  • Analyzes data on hundreds of thousands to over a million Palestinians
  • Scores each person (1 to 100) based on "suspicion signals"
    • WhatsApp group connections
    • Phone replacement patterns
    • Address changes
  • Uses existing "Hamas member data" as training data

Key result:

→ Roughly 37,000 people classified as "potential targets"


(2) Errors and the collapse of human verification

  • Result of the sample check:
    → roughly 10% estimated to be civilians
  • But in actual operation:
    • Review time: about 20 seconds
    • What was checked: essentially "is it a man?"

In other words:

→ <Human verification was in effect reduced to a formality>


(3) Where's Daddy: "strike the moment they come home"

The second system:

→ <Tracks the target's location and strikes "when they have entered the house">

Why the home?

  • The easiest location to automate
  • Easy to link an individual to a place

Result:

→ <More deaths at the family level>

  • UN statistics:
    → more than half of the fatalities concentrated in roughly 1,000 families

(4) The change in how strikes are conducted

Previously:

  • Focused on senior commanders
  • A limited set of targets

Now (as reported):

→ <The entire Hamas military organization treated as a target for elimination>

In other words:

  • Low-ranking operatives included
  • Mass target generation required → AI introduced

(5) Setting the "permitted civilian harm"

The most shocking part:

  • Low-ranking targets:
    → 15 to 20 civilians permitted
  • Senior targets:
    → cases of up to 300 permitted, according to the claims

Comparison:

  • United States:
    • Ordinary targets: 0
    • Exceptions: around 15

→ <A drastic shift in the standard of proportionality>


(6) How it was seen internally

Source testimony:

  • "It was good that the AI decided things coldly"
  • "We were humans stamping the machine's decisions"
  • "In the long run it actually makes things more dangerous"

The core point:

→ <The outsourcing of moral responsibility>


2. The core significance of this reporting (a structural reading)

(1) The industrialization of war: a "target production system"

The essence of this case is not simply an AI problem.

→ <War has been converted into a "production system">

Structure:

  • Input: mass surveillance data
  • Processing: AI classification
  • Output: tens of thousands of targets

In other words:

→ <from "a war that finds enemies" → to "a war that produces enemies">


(2) The shrinking of human judgment

Traditional war:

  • Intelligence analysis
  • Judgment
  • Responsibility

Now:

→ AI → humans only approve

Result:

→ <Less ethical friction>


(3) The collapse of the concept of "proportionality"

The core of international humanitarian law:

→ <balancing military advantage vs civilian harm>

But in this case:

  • The definition of "excessive" is entirely internal
  • The standard can in practice be stretched without limit

In other words:

→ <A legal concept turns into a political choice>


(4) The shift from emotion → technology

The interesting point:

  • After October 7, an "emotion of revenge"
  • But execution through "AI automation"

The combined result:

→ <emotional motive + technical efficiency = a structure of mass killing>


(5) The possibility of long-term strategic failure

The internal assessment:

→ "short-term security up / long-term security down"

Why:

  • Harm inflicted on whole families
  • Growing hostility across the entire society

In other words:

→ <tactical success vs possible strategic failure>


3. Credibility assessment (important)

This material is **investigative reporting, not established fact**.

(1) Strengths

  • Sources: six intelligence insiders
  • Multiple sources
  • Documents combined with testimony
  • Interviews with victims on the ground

→ <Above-average credibility>


(2) Limitations

  • Mostly anonymous sources
  • An official military denial exists
  • Some technical details could not be published

→ <Full verification is impossible>


(3) Assessment

→ <"Highly credible in parts, but the overall picture needs further verification">


4. What it means in a larger context

Connecting this to the trends you have been tracking, Sejin:


(1) The beginning of AI warfare

This case is one of the first examples:

→ <a case where AI became the "core of target generation" rather than a "tactical aid">


(2) The difference from U.S. and Western wars

United States:

  • Limits on the number of targets
  • Emphasis on legal review

This case:

→ <speed + scale first>


(3) Three stages in the transformation of modern war

① surveillance war
② precision-strike war
③ AI-automated war ← now


(4) The change in the structure of ethics

Before:

→ "a person kills"

Now:

→ "a system kills"

Result:

→ <the dilution of responsibility>


5. Critical assessment

(1) The most important problem

The heart of this reporting is not the "error rate."

→ <Scale>

  • More than the 10% error,
  • the 37,000 targets are the bigger problem

(2) The real danger

→ not "it can be wrong"

→ but <"it works too well">

In other words:

  • mass killing becomes feasible
  • human resistance diminishes

(3) The political vacuum

The closing message:

→ there is a military strategy
→ but no political solution

This structure:

→ <reproduces endless conflict>


6. Overall conclusion

In one sentence:

→ <This case is not about AI as such; it is a signal that war itself is turning into an "automated mass-targeting system">


7. Key questions for Sejin

The most important questions this material raises are these:

  1. <Does AI make war more "precise," or more "large-scale"?>
  2. <Can the judgment of proportionality be handed over to technology?>
  3. <Will this structure spread to other states?>


==

