Meta on Trial: How New Mexico’s Child-Safety Case Could Change Social Media Forever
The New Mexico trial that could redefine what Big Tech owes children
On Monday morning, in a courtroom in Santa Fe, an attorney for the state named Donald Migliori stood before twelve jurors and six alternates and began to describe a world that most of them probably carried in their pockets. It was the world of Instagram and Facebook—a world of feeds and follows, of algorithmic nudges and infinite scroll—and Migliori, a partner at the plaintiffs’ firm Motley Rice, was there to argue that its architect, Meta Platforms, had knowingly allowed that world to become a hunting ground for people who prey on children. “The theme throughout this trial,” he told the jury, “is going to be that Meta puts profits over safety.”
The trial that opened that day—projected to last six to seven weeks—is the culmination of a lawsuit filed in December, 2023, by New Mexico’s attorney general, Raúl Torrez, and it represents something unusual in the growing catalogue of legal actions against social-media companies: the first standalone state trial to take Meta to a jury over allegations of child sexual exploitation. What makes it unusual is not merely the gravity of the accusations but the manner in which they were assembled. New Mexico didn’t just comb through internal documents and depose former employees. It ran an undercover operation.
The legal architecture of the case is worth pausing over, because it reflects a particular ingenuity. Section 230 of the Communications Decency Act has long provided tech platforms with a powerful shield: they cannot, as a general rule, be held liable as the publisher or speaker of content that their users create. New Mexico’s attorneys built their case to sidestep that shield entirely, advancing two distinct theories of liability that have nothing to do with publishing.
The first concerns product design. The complaint alleges, in painstaking detail, that Meta’s recommendation algorithms don’t merely host child sexual abuse material—they actively curate it. The systems connect users seeking such material with those who sell it, weave networks among pedophilic accounts, and funnel sexual content toward profiles that the platform itself has identified as belonging to minors. This is not the passive intermediary of Section 230 lore. It is, in the state’s telling, a matchmaking service for predators.
The second theory targets Meta’s public statements. Mark Zuckerberg and other executives repeatedly told Congress, the press, and the public that child safety was a “top priority,” that the platforms were “safe and good for kids,” and that illegal content was effectively policed. Internal documents—many of them surfaced by the whistleblower Frances Haugen—paint a starkly different picture.
The evidentiary backbone of the case is an undercover investigation that reads less like a consumer-protection inquiry and more like the kind of operation one associates with organized-crime task forces. The state’s investigators created fictitious accounts for minors—including a twelve-year-old named Sunny Paxton and a thirteen-year-old named Issa Bee—and then watched, with the methodical patience of field researchers, to see what Meta’s platforms would deliver to them.
What the platforms delivered was remarkable. The investigation, which bore the codename “Operation MetaPhile,” eventually led to the arrest of three men in 2024, but its most damning yield was not criminal defendants—it was documentation.
Consider Issa Bee. Her Facebook profile listed a false birth year—2002, making her nominally an adult—to avoid the platform’s age-gating, but everything else about the account announced, as loudly as a social-media profile can, that she was a child. She posted about the school cafeteria, the school bus, the first day of seventh grade. Her musical tastes ran to Olivia Rodrigo and Harry Styles. She mourned, in one post, the loss of her last baby tooth. Within a short time, the account had accumulated five thousand friends and more than sixty-seven hundred followers—nearly all of them adult men, with the largest clusters in Nigeria, Ghana, and the Dominican Republic. Meta, confronted with these rather conspicuous red flags, did not intervene. Instead, it invited the thirteen-year-old to set up a professional account and begin monetizing her audience.
Sunny Paxton’s account took several attempts to create; the platform rejected her real date of birth each time. On the fourth try, the investigators used the same device and the same identifying information but a different birth year, and the account went live. Within forty-eight hours, Sunny had amassed nearly six hundred Facebook friends. Before she had conducted a single search, the algorithm had begun recommending sexually explicit content and groups devoted to masturbation, bondage, and fetishism.
Then there was Sophia Keys, the fourteen-year-old who joined teen dating groups on Facebook and was steered to WhatsApp, where an adult man offered her between a hundred and twenty thousand and a hundred and eighty thousand dollars to appear in a pornographic video, adding—with the grotesque specificity of a business proposal—that he accepted participants “from the age of ten.”
According to the complaint, Meta removed none of the posts, accounts, or messages generated by these interactions.
Meta’s defense, argued by Kevin Huff of the firm Kellogg Hansen Todd Figel & Frederick, rests on a claim that might be summarized as radical transparency. Meta, Huff told the jury, publicly discloses its policies, warns users that they may encounter inappropriate content, and openly acknowledges that no system of safeguards is perfect. “That’s disclosure, not deception,” he said—a formulation that has the compact elegance of a slogan, even if the state would argue it has the moral elegance of a disclaimer on a pack of cigarettes.
Huff pointed to Teen Accounts, a feature for users between thirteen and seventeen that restricts content visibility, limits contact from strangers, and caps time on the app. The feature launched on Instagram in 2024 and was extended globally to Facebook and Messenger the following year. Meta, Huff argued, implemented these protections knowing they would reduce teen traffic—hardly the behavior of a company that puts profits before children.
The defense also attacked the ethics of the investigation itself. The state’s investigators, Huff noted, created accounts using false adult birth dates—specifically to circumvent the teen safety features whose inadequacy the state now decries—and used photographs of real people to lure known pedophiles into engagement. “The state designed the investigation to intentionally circumvent Meta’s safeguards,” Huff told the jury, “to make Facebook and Instagram look more dangerous than they really are.”
It is, as legal arguments go, a double-edged observation. The state would presumably respond that any thirteen-year-old with a working knowledge of arithmetic could do exactly what its investigators did—and that Meta knows this perfectly well.
The trial in Santa Fe does not exist in isolation. That same week, a parallel multi-state proceeding opened in Los Angeles, focussed on the addictive design of social-media platforms. TikTok and Snapchat settled their claims in that case; Meta and Google’s YouTube remain as defendants.
But the New Mexico case stands apart in its methodology. Most litigation against Big Tech proceeds from the inside out—leaked documents, whistleblower testimony, the paper trail of corporate negligence. New Mexico went the other direction. It built its case from the outside in, documenting in real time what Meta’s platforms actually do when a child—or someone who appears to be a child—shows up. The approach has more in common with drug-enforcement stings than with consumer-protection litigation, and it produces a different kind of evidence: not what the company said in a boardroom about what might happen, but what happened.
One of the most revealing threads in the case concerns a metric. Meta uses a measure called “prevalence” to publicly report how much problematic content appears on its platforms. The metric expresses views of content that violates the company’s community standards as a percentage of all content views. The numbers that Meta has consistently reported are vanishingly small—hundredths of a percent.
In the summer of 2021, however, Meta’s own researchers conducted an internal study called BEEF—an acronym for Bad Experiences and Encounters Framework—that told a very different story. Fifty-one per cent of Instagram users reported having a negative or harmful experience in the preceding seven days. Among users aged thirteen to fifteen, a significant share reported receiving unwanted sexual advances—initially calculated at 24.4 per cent, though later revised by the research team to approximately thirteen per cent. Only one per cent of users who had such experiences reported them, and only two per cent of those reports resulted in content being removed.
The gap between Meta’s public prevalence figures and the BEEF findings is not a rounding error. It is an abyss.
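The arithmetic behind that divergence is worth spelling out. The sketch below, in Python, is purely illustrative, with hypothetical figures chosen for the example rather than drawn from Meta’s actual data: it shows how a views-based prevalence rate can sit at hundredths of a percent while a survey of users, like BEEF, finds that a majority of them encountered something harmful, because the two metrics divide by different denominators.

    # Illustrative only: hypothetical figures showing how a views-based
    # "prevalence" metric and a user-experience survey can diverge.

    total_views = 1_000_000_000          # all content views in a given period
    violating_views = 300_000            # views of content that breaks the rules

    # Prevalence: violating views as a share of all views.
    prevalence = 100 * violating_views / total_views
    print(f"Prevalence: {prevalence:.2f}% of views")      # 0.03%

    total_users = 10_000_000             # users active in the same period
    users_reporting_harm = 5_100_000     # users reporting a harmful encounter

    # Survey-style rate: share of users reporting at least one bad experience.
    harm_rate = 100 * users_reporting_harm / total_users
    print(f"Users reporting harm: {harm_rate:.0f}%")       # 51%

    # Both can be true at once: a few bad encounters are a vanishing fraction
    # of any user's total views, yet they can reach a majority of users.

The point is not the particular numbers, which are invented, but the denominators: one metric counts views, the other counts people.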
Arturo Bejar, a former director of engineering at Meta who worked at the company from 2009 to 2015 and returned as a consultant from 2019 to 2021, testified before the Senate on November 7, 2023, that Zuckerberg and other company leaders “focused on the prevalence measure because it created a distorted picture about the safety of Meta’s platforms.” He emailed Zuckerberg directly on October 5, 2021, attaching the BEEF results and asking what he later described as the essential question: “What if policy-based solutions only cover a single-digit percentage of what is harming people?” Zuckerberg, Bejar testified, did not respond. Bejar is on the witness list for the New Mexico trial.
In his opening statement, Migliori walked the jury through a series of juxtapositions—public declarations set against internal records—that had the quality of a call-and-response between the company Meta claimed to be and the company it was.
In 2018, Zuckerberg told Congress that Meta prioritized child safety “above everything else.” Internally, the hierarchy ran differently. “It’s not safety first,” Migliori told the jury, summarizing the documents. “It’s growth and freedom of expression first.” He showed the jury a passage from one of the internal documents in which Zuckerberg wrote: “Keeping people safe is the counterbalance, not the main point.”
Zuckerberg also assured Congress that Meta does not permit children under thirteen on its platforms. Internal data, Migliori said, indicated that more than four million Instagram users were younger than thirteen. A 2020 internal memo described inappropriate interactions with children as “through the roof.” One employee wrote that Meta had not been fulfilling its legal obligation to report exploitative images of children to the National Center for Missing and Exploited Children.
What is ultimately at stake in Santa Fe extends well beyond the borders of New Mexico or, for that matter, the United States. In the European Union, the Digital Services Act—Regulation 2022/2065—imposes on very large online platforms a duty to assess systemic risks, including risks to the protection of minors, and to implement proportionate mitigation measures. Article 34 of the D.S.A. specifically identifies “any actual or foreseeable negative effects” on the protection of minors as a category of risk subject to mandatory assessment. The question of whether a recommendation algorithm can transform a platform from a passive intermediary into an active participant in harm is not merely a question for American tort law. It is the central regulatory question of the age.
New Mexico is seeking injunctive relief, disgorgement of profits, and civil penalties of up to five thousand dollars per violation under the state’s Unfair Practices Act—a figure that, given Meta’s scale of operations, could multiply into the hundreds of millions or beyond. Zuckerberg, though dismissed as a defendant in his personal capacity, was deposed and may still be called to testify.
Before a jury of twelve and six alternates, under the supervision of Judge Bryan Biedscheid, the case will turn on questions that are at once technical and moral: whether an algorithm that connects predators with children constitutes a defective product; whether a company that knows its safety metrics are misleading and publishes them anyway is engaged in deception; whether the gap between what Meta says and what Meta does is a matter for a jury to weigh.
The answers, whatever they are, will not arrive for weeks. In the meantime, the platforms will continue to operate, the algorithms will continue to recommend, and somewhere, a twelve-year-old will continue to receive friend requests from strangers. The trial in Santa Fe is, in a sense, a test of whether the law can keep pace with the technology it is asked to govern—or whether it will remain, as it has for much of the social-media era, a step behind and a courthouse away.
