Musk Found Not Liable in Tesla Tweet Trial

Jurors on Friday cleared Elon Musk of liability for investors’ losses in a fraud trial over his 2018 tweets falsely claiming that he had funding in place to take Tesla private.

The tweets sent the Tesla share price on a rollercoaster ride, and Musk was sued by shareholders who said the tycoon acted recklessly in an effort to squeeze investors who had bet against the company.

Jurors deliberated for barely two hours before returning to the San Francisco courtroom to say they unanimously agreed that neither Musk nor the Tesla board perpetrated fraud with the tweets and in their aftermath.

“Thank goodness, the wisdom of the people has prevailed!” tweeted Musk, who had tried but failed to get the trial moved to Texas on the grounds jurors in California would be biased against him.

“I am deeply appreciative of the jury’s unanimous finding of innocence in the Tesla 420 take-private case.”

Attorney Nicholas Porritt, who represents Glen Littleton and other investors in Tesla, had argued in court that the case was about making sure the rich and powerful have to abide by the same stock market rules as everyone else.

“Elon Musk published tweets that were false with reckless disregard as to their truth,” Porritt told the panel of nine jurors during closing arguments.

Porritt pointed to expert testimony estimating that Musk’s funding claim, which turned out not to be true, cost investors billions of dollars overall, and argued that Musk and the Tesla board should be made to pay damages.

But Musk attorney Alex Spiro successfully countered that the billionaire may have erred on wording in a hasty tweet, but that he did not set out to deceive anyone.

Spiro also portrayed the mercurial entrepreneur, who now owns Twitter, as having had a troubled childhood and having come to the United States as a poor youth chasing dreams.

No joke

Musk testified during three days on the witness stand that his 2018 tweet about taking Tesla private at $420 a share was no joke and that Saudi Arabia’s sovereign wealth fund was serious about helping him do it.

“To Elon Musk, if he believes it or even just thinks about it then it’s true no matter how objectively false or exaggerated it may be,” Porritt told jurors.

Tesla and its board were also to blame, because they let Musk use his Twitter account to post news about the company, Porritt argued.

The case revolved around a pair of tweets in which Musk said “funding secured” for a project to buy out the publicly traded electric automaker, then in a second tweet added that “investor support is confirmed.”

“He wrote two words ‘funding secured’ that were technically inaccurate,” Spiro said of Musk while addressing jurors.

“Whatever you think of him, this isn’t a bad tweeter trial, it’s a ‘did they prove this man committed fraud?’ trial.”

Musk did not intend to deceive anyone with the tweets and had the connections and wealth to take Tesla private, Spiro contended.

During the trial playing out in federal court in San Francisco, Spiro said that even though the tweets may have been a “reckless choice of words,” they were not fraud.

“I’m being accused of fraud; it’s outrageous,” Musk said while testifying in person.

Musk said he fired off the tweets at issue after learning of a Financial Times story about a Saudi Arabian investment fund wanting to acquire a stake in Tesla.

The trial came at a sensitive time for Musk, who has dominated the headlines for his chaotic takeover of Twitter, where he has laid off more than half of the 7,500 employees and scaled back content moderation.

ChatGPT: The Promises, Pitfalls and Panic

Excitement around ChatGPT — an easy-to-use AI chatbot that can deliver an essay or computer code upon request and within seconds — has sent schools into panic and turned Big Tech green with envy.

The potential impact of ChatGPT on society remains complicated and unclear even as its creator Wednesday announced a paid subscription version in the United States.

Here is a closer look at what ChatGPT is (and is not):

Is this a turning point?  

It is entirely possible that November’s release of ChatGPT by California company OpenAI will be remembered as a turning point in introducing a new wave of artificial intelligence to the wider public.  

What is less clear is whether ChatGPT is actually a breakthrough, with some critics calling it a brilliant PR move that helped OpenAI score billions of dollars in investments from Microsoft.

Yann LeCun, Chief AI Scientist at Meta and professor at New York University, believes “ChatGPT is not a particularly interesting scientific advance,” calling the app a “flashy demo” built by talented engineers.

LeCun, speaking to the Big Technology Podcast, said ChatGPT is void of “any internal model of the world” and is merely churning “one word after another” based on inputs and patterns found on the internet.
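LeCun’s description of a model “churning one word after another” can be illustrated with a toy sketch. This is an illustration only, not OpenAI’s code: a tiny bigram table stands in for the billions of learned parameters in a real model.

```python
import random

# Toy illustration of LeCun's point: an autoregressive model emits one
# word at a time, each chosen from patterns seen in its training text.
corpus = "the cat sat on the mat the cat ran on the grass".split()

follows = {}  # word -> list of words observed to follow it
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def generate(start, length, seed=0):
    """Churn out words one after another, with no model of the world."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the", 5))
```

The output is locally fluent yet driven purely by observed word patterns, which is the limitation LeCun is pointing at.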

“When working with these AI models, you have to remember that they’re slot machines, not calculators,” warned Haomiao Huang of Kleiner Perkins, the Silicon Valley venture capital firm.

“Every time you ask a question and pull the arm, you get an answer that could be marvelous… or not… The failures can be extremely unpredictable,” Huang wrote in Ars Technica, the tech news website.

Just like Google

ChatGPT is powered by an AI language model that is nearly three years old — OpenAI’s GPT-3 — and the chatbot uses only part of its capability.  

The true revolution is the humanlike chat, said Jason Davis, research professor at Syracuse University.

“It’s familiar, it’s conversational and guess what? It’s kind of like putting in a Google search request,” he said.

ChatGPT’s rockstar-like success even shocked its creators at OpenAI, which received billions in new financing from Microsoft in January.

“Given the magnitude of the economic impact we expect here, more gradual is better,” OpenAI CEO Sam Altman said in an interview to StrictlyVC, a newsletter.

“We put GPT-3 out almost three years ago… so the incremental update from that to ChatGPT, I felt like should have been predictable and I want to do more introspection on why I was sort of miscalibrated on that,” he said.

The risk, Altman added, was startling the public and policymakers. On Tuesday, his company unveiled a tool for detecting AI-generated text amid concerns from teachers that students may rely on artificial intelligence to do their homework.

What now?

From lawyers to speechwriters, from coders to journalists, everyone is waiting breathlessly to feel disruption caused by ChatGPT. OpenAI just launched a paid version of the chatbot – $20 per month for an improved and faster service.

For now, officially, the first significant application of OpenAI’s tech will be for Microsoft software products.  

Though details are scarce, most assume that ChatGPT-like capabilities will turn up on the Bing search engine and in the Office suite.

“Think about Microsoft Word. I don’t have to write an essay or an article, I just have to tell Microsoft Word what I wanted to write with a prompt,” said Davis.

He believes influencers on TikTok and Twitter will be the earliest adopters of this so-called generative AI since going viral requires huge amounts of content and ChatGPT can take care of that in no time.

This of course raises the specter of disinformation and spamming carried out at an industrial scale.  

For now, Davis said the reach of ChatGPT is very limited by computing power, but once this is ramped up, the opportunities and potential dangers will grow exponentially.

And much like the ever-imminent arrival of self-driving cars that never quite happens, experts disagree on whether that is a question of months or years.

Ridicule

LeCun said Meta and Google have refrained from releasing AI as potent as ChatGPT out of fear of ridicule and backlash.

Quieter releases of language-based bots – like Meta’s Blenderbot or Microsoft’s Tay for example – were quickly shown capable of generating racist or inappropriate content.

Tech giants have to think hard before releasing something “that is going to spew nonsense” and disappoint, he said.

Zimbabwe Plans to Build $60 Billion ‘Cyber City’ to Ease Harare Congestion

Zimbabwe plans to build “Zim Cyber City,” a modern capital expected to cost up to $60 billion in raised funds and include new government buildings and a presidential palace. Critics are blasting the plan as wasteful when more than half the population lives in poverty and the government has let the current capital, Harare, fall apart. Columbus Mavhunga reports from Mount Hampden, Zimbabwe. Camera: Blessing Chigwenhembe

Boeing Bids Farewell to an Icon, Delivers Last 747 Jumbo Jet

Boeing bid farewell to an icon on Tuesday, delivering its final 747 jumbo jet as thousands of workers who helped build the planes over the past 55 years looked on. 

Since its first flight in 1969, the giant yet graceful 747 has served as a cargo plane, a commercial aircraft capable of carrying nearly 500 passengers, a transport for NASA’s space shuttles, and the Air Force One presidential aircraft. It revolutionized travel, connecting international cities that had never before had direct routes and helping democratize passenger flight. 

But over about the past 15 years, Boeing and its European rival Airbus have introduced more profitable and fuel efficient wide-body planes, with only two engines to maintain instead of the 747’s four. The final plane is the 1,574th built by Boeing in the Puget Sound region of Washington state. 

Thousands of workers joined Boeing and other industry executives from around the world — as well as actor and pilot John Travolta, who has flown 747s — Tuesday for a ceremony in the company’s massive factory north of Seattle, marking the delivery of the last one to cargo carrier Atlas Air. 

“If you love this business, you’ve been dreading this moment,” said longtime aviation analyst Richard Aboulafia. “Nobody wants a four-engine airliner anymore, but that doesn’t erase the tremendous contribution the aircraft made to the development of the industry or its remarkable legacy.” 

Boeing set out to build the 747 after losing a contract for a huge military transport, the C-5A. The idea was to take advantage of the new engines developed for the transport — high-bypass turbofan engines, which burned less fuel by passing air around the engine core, enabling a farther flight range — and to use them for a newly imagined civilian aircraft. 

It took more than 50,000 Boeing workers less than 16 months to churn out the first 747 — a Herculean effort that earned them the nickname “The Incredibles.” The jumbo jet’s production required the construction of a massive factory in Everett, north of Seattle — the world’s largest building by volume. The factory wasn’t even completed when the first planes were finished. 

Among those in attendance was Desi Evans, 92, who joined Boeing at its factory in Renton, south of Seattle, in 1957 and went on to spend 38 years at the company before retiring. One day in 1967, his boss told him he’d be joining the 747 program in Everett — the next morning. 

“They told me, ‘Wear rubber boots, a hard hat and dress warm, because it’s a sea of mud,'” Evans recalled. “And it was — they were getting ready for the erection of the factory.” 

He was assigned as a supervisor to help figure out how the interior of the passenger cabin would be installed and later oversaw crews that worked on sealing and painting the planes. 

“When that very first 747 rolled out, it was an incredible time,” he said as he stood before the last plane, parked outside the factory. “You felt elated — like you’re making history. You’re part of something big, and it’s still big, even if this is the last one.” 

The plane’s fuselage was 225 feet (68.5 meters) long and the tail stood as tall as a six-story building. The plane’s design included a second deck extending from the cockpit back over the first third of the plane, giving it a distinctive hump and inspiring a nickname, the Whale. More romantically, the 747 became known as the Queen of the Skies. 

Some airlines turned the second deck into a first-class cocktail lounge, while even the lower deck sometimes featured lounges or even a piano bar. One decommissioned 747, originally built for Singapore Airlines in 1976, has been converted into a 33-room hotel near the airport in Stockholm. 

“It was the first big carrier, the first widebody, so it set a new standard for airlines to figure out what to do with it, and how to fill it,” said Guillaume de Syon, a history professor at Pennsylvania’s Albright College who specializes in aviation and mobility. “It became the essence of mass air travel: You couldn’t fill it with people paying full price, so you need to lower prices to get people onboard. It contributed to what happened in the late 1970s with the deregulation of air travel.” 

The first 747 entered service in 1970 on Pan Am’s New York-London route, and its timing was terrible, Aboulafia said. It debuted shortly before the oil crisis of 1973, amid a recession that saw Boeing’s employment fall from 100,800 employees in 1967 to a low of 38,690 in April 1971. The “Boeing bust” was infamously marked by a billboard near the Seattle-Tacoma International Airport that read, “Will the last person leaving SEATTLE — Turn out the lights.” 

An updated model — the 747-400 series — arrived in the late 1980s and had much better timing, coinciding with the Asian economic boom of the early 1990s, Aboulafia said. He took a Cathay Pacific 747 from Los Angeles to Hong Kong as a twentysomething backpacker in 1991. 

“Even people like me could go see Asia,” Aboulafia said. “Before, you had to stop for fuel in Alaska or Hawaii and it cost a lot more. This was a straight shot — and reasonably priced.” 

Delta was the last U.S. airline to use the 747 for passenger flights, which ended in 2017, although some other international carriers continue to fly it, including the German airline Lufthansa. 

Lufthansa CEO Carsten Spohr recalled traveling in a 747 as a young exchange student and said that when he realized he’d be traveling to the West Coast of the U.S. for Tuesday’s event, there was only one way to go: riding first-class in the nose of a Lufthansa 747 from Frankfurt to San Francisco. He promised the crowd Lufthansa would keep flying the 747 for many years to come. 

“We just love the airplane,” he said. 

Atlas Air ordered four 747-8 freighters early last year, with the final one — emblazoned with an image of Joe Sutter, the engineer who oversaw the 747’s original design team — delivered Tuesday. Atlas CEO John Dietrich called the 747 the greatest air freighter, thanks in part to its unique capacity to load through the nose cone. 

Cheaters Beware: ChatGPT Maker Releases AI Detection Tool 

The maker of ChatGPT is trying to curb its reputation as a freewheeling cheating machine with a new tool that can help teachers detect if a student or artificial intelligence wrote that homework.

The new AI Text Classifier launched Tuesday by OpenAI follows a weeks-long discussion at schools and colleges over fears that ChatGPT’s ability to write just about anything on command could fuel academic dishonesty and hinder learning.

OpenAI cautions that its new tool – like others already available – is not foolproof. The method for detecting AI-written text “is imperfect and it will be wrong sometimes,” said Jan Leike, head of OpenAI’s alignment team tasked to make its systems safer.

“Because of that, it shouldn’t be solely relied upon when making decisions,” Leike said.

Teenagers and college students were among the millions of people who began experimenting with ChatGPT after it launched November 30 as a free application on OpenAI’s website. And while many found ways to use it creatively and harmlessly, the ease with which it could answer take-home test questions and assist with other assignments sparked a panic among some educators.

By the time schools opened for the new year, New York City, Los Angeles and other big public school districts began to block its use in classrooms and on school devices.

The Seattle Public Schools district initially blocked ChatGPT on all school devices in December but then opened access to educators who want to use it as a teaching tool, said Tim Robinson, the district spokesman.

“We can’t afford to ignore it,” Robinson said.

The district is also discussing possibly expanding the use of ChatGPT into classrooms to let teachers use it to train students to be better critical thinkers and to let students use the application as a “personal tutor” or to help generate new ideas when working on an assignment, Robinson said.

School districts around the country say they are seeing the conversation around ChatGPT evolve quickly.

“The initial reaction was ‘OMG, how are we going to stem the tide of all the cheating that will happen with ChatGPT,'” said Devin Page, a technology specialist with the Calvert County Public School District in Maryland. Now there is a growing realization that “this is the future” and blocking it is not the solution, he said.

“I think we would be naïve if we were not aware of the dangers this tool poses, but we also would fail to serve our students if we ban them and us from using it for all its potential power,” said Page, who thinks districts like his own will eventually unblock ChatGPT, especially once the company’s detection service is in place.

OpenAI emphasized the limitations of its detection tool in a blog post Tuesday, but said that in addition to deterring plagiarism, it could help to detect automated disinformation campaigns and other misuse of AI to mimic humans.

The longer a passage of text, the better the tool is at detecting if an AI or human wrote something. Type in any text — a college admissions essay, or a literary analysis of Ralph Ellison’s “Invisible Man” — and the tool will label it as either “very unlikely, unlikely, unclear if it is, possibly, or likely” AI-generated.
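The five labels quoted above map naturally onto score buckets. A minimal sketch of that mapping follows; the numeric cutoffs are hypothetical, since OpenAI has not published the classifier’s thresholds.

```python
# The five labels are the ones the tool reports; the numeric cutoffs
# below are hypothetical, as OpenAI has not published its thresholds.
def label_text(ai_probability):
    if ai_probability < 0.10:
        return "very unlikely"
    if ai_probability < 0.45:
        return "unlikely"
    if ai_probability < 0.65:
        return "unclear if it is"
    if ai_probability < 0.90:
        return "possibly"
    return "likely"

print(label_text(0.95))  # likely
```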

But much like ChatGPT itself, which was trained on a huge trove of digitized books, newspapers and online writings but often confidently spits out falsehoods or nonsense, it’s not easy to interpret how it came up with a result.

“We don’t fundamentally know what kind of pattern it pays attention to, or how it works internally,” Leike said. “There’s really not much we could say at this point about how the classifier actually works.”

“Like many other technologies, it may be that one district decides that it’s inappropriate for use in their classrooms,” said OpenAI policy researcher Lama Ahmad. “We don’t really push them one way or another.”

Huawei Latest Target of US Crackdown on China Tech

China says it is “deeply concerned” over reports that the United States is moving to further restrict sales of American technology to Huawei, a tech company that U.S. officials have long singled out as a threat to national security for its alleged support of Beijing’s espionage efforts.

As first reported by the Financial Times, the U.S. Department of Commerce has informed American firms that it will no longer issue licenses for technology exports to Huawei, thereby isolating the Shenzhen-based company from supplies it needs to make its products.

The White House and Commerce Department have not responded to VOA’s request for confirmation of the reports. But observers say the move may be the latest tactic in the Biden administration’s geoeconomics strategy as it comes under increasing Republican pressure to outcompete China. 

The crackdown on Chinese companies began under the Trump administration, which in 2019 added Huawei to an export blacklist but made exceptions for some American firms, including Qualcomm and Intel, to provide non-5G technology licenses.

Since taking office in 2021, President Joe Biden has taken an even more aggressive stance than his predecessor, Donald Trump. Now the Biden administration appears to be heading toward a total ban on all tech exports to Huawei, said Sam Howell, who researches quantum information science at the Center for a New American Security’s Technology and National Security program.

“These new restrictions from what we understand so far would include items below the 5G level,” she told VOA. “So 4G items, Wi-Fi 6 and [Wi-Fi] 7, artificial intelligence, high performance computing and cloud capabilities as well.”

Should the Commerce Department follow through with the ban, there will likely be pushback from U.S. companies whose revenues will be directly affected, Howell said. Currently Intel and Qualcomm still sell chips used in laptops and phones manufactured by Huawei.

Huawei and Beijing have denied that they are a threat to other countries’ national security. Foreign ministry spokesperson Mao Ning accused Washington of “overstretching the concept of national security and abusing state power” to suppress Chinese competitors.

“Such practices are contrary to the principles of market economy” and are “blatant technological hegemony,” Mao said. 

Outcompeting Chinese tech

The latest U.S. move on Huawei is part of a U.S. effort to outcompete China in the cutting-edge technology sector.

In October, Biden imposed sweeping restrictions on providing advanced semiconductors and chipmaking equipment to Chinese companies, seeking to maintain dominance particularly on the most advanced chips. His administration is rallying allies behind the effort, including the Netherlands, Japan, South Korea and Taiwan – home to leading companies that play key roles in the industry’s supply chain.

U.S. officials say export restrictions on chips are necessary because China can use semiconductors to advance its military systems, including weapons of mass destruction, and commit human rights abuses. 

The October restrictions follow the CHIPS and Science Act of 2022, which Biden signed into law in August and which bars companies receiving U.S. subsidies from investing in and expanding cutting-edge chipmaking facilities in China. It also provides $52 billion to strengthen the domestic semiconductor industry.

Beijing has invested heavily in its own semiconductor sector, with plans to invest $1.4 trillion in advanced technologies in a bid to achieve 70% self-sufficiency in semiconductors by 2025. 

TikTok a target

TikTok, a social media application owned by the Chinese company ByteDance that has built a massive following especially among American youth, is also under U.S. lawmakers’ scrutiny due to suspicion that it could be used as a tool of Chinese foreign espionage or influence.

CEO Shou Zi Chew is scheduled to appear before the House Energy and Commerce Committee on March 23 to testify about TikTok’s “consumer privacy and data security practices, the platforms’ impact on kids, and their relationship with the Chinese Communist Party.”

Lawmakers are divided on whether to ban or allow the popular app, which has been downloaded onto about 100 million U.S. smartphones, or force its sale to an American buyer.

Earlier in January, Congress set up the House Select Committee on China, tasked with dealing with legislation to combat the dangers of a rising China.

As Children in US Study Online, Apps Watch Their Every Move 

For New York teacher Michael Flanagan, the pandemic was a crash course in new technology — rushing out laptops to stay-at-home students and shifting hectic school life online.

Students are long since back at school, but the technology has lived on, and with it has come a new generation of apps that monitor pupils online, sometimes around the clock, even on days off spent at home with family and friends.

The programs scan students’ online activity, social media posts and more — aiming to keep them focused, detect mental health problems and flag any potential for violence.
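At their simplest, such scanners match a student’s activity against a watch list of terms. A crude sketch of that approach follows; the word list here is hypothetical, as vendors keep their actual term lists proprietary.

```python
# Hypothetical watch list; real products keep their term lists proprietary.
FLAG_TERMS = {"hurt myself", "bring a gun", "fight after school"}

def scan(activity_log):
    """Return any watch-list terms found in a student's activity text."""
    text = activity_log.lower()
    return sorted(term for term in FLAG_TERMS if term in text)

print(scan("search: I want to hurt myself"))   # ['hurt myself']
print(scan("search: essay on the civil war"))  # []
```

Plain substring matching like this has no sense of context, which is one reason critics cited below say the alerts can misfire.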

“You can’t unring the bell,” said Flanagan, who teaches social studies and economics. “Everybody has a device.”

The new trend for tracking, however, has raised fears that some of the apps may target minority pupils, while others have outed LGBT+ students without their consent, and many are used to instill discipline as much as deliver care.

So Flanagan has parted ways with many of his colleagues and won’t use such apps to monitor his students online.

He recalled seeing a demo of one such program, GoGuardian, in which a teacher showed — in real time — what one student was doing on his computer. The child was at home, on a day off.

Such scrutiny raised a big red flag for Flanagan.

“I have a school-issued device, and I know that there’s no expectation of privacy. But I’m a grown man — these kids don’t know that,” he said.

A New York City Department of Education spokesperson said that the use of GoGuardian Teacher “is only for teachers to see what’s on the student’s screen in the moment, provide refocusing prompts, and limit access to inappropriate content.”

Valued at more than $1 billion, GoGuardian — one of a handful of high-profile apps in the market — is now monitoring more than 22 million students, including in the New York City, Chicago and Los Angeles public systems.

Globally, the education technology sector is expected to grow by $133 billion from 2021 to 2026, market researcher Technavio said last year.

Parents expect schools to keep children safe in classrooms or on field trips, and schools also “have a responsibility to keep students safe in digital spaces and on school-issued devices,” GoGuardian said in a statement.

The company says it “provides educators with the ability to protect students from harmful or explicit content.”

Nowadays, online monitoring “is just part of the school environment,” said Jamie Gorosh, policy counsel with the Future of Privacy Forum, a watchdog group.

And even as schools move beyond the pandemic, “it doesn’t look like we’re going back,” she said.

Guns and depression

A key priority for monitoring is to keep students engaged in their academic work, but it also taps into fast-rising concerns over school violence and children’s mental health, which medical groups in 2021 termed a national emergency.

According to federal data released this month, 82% of schools now train staff on how to spot mental health problems, up from 60% in 2018; 65% have confidential threat-reporting systems, up 15% in the same period.

In a survey last year by the nonprofit Center for Democracy and Technology (CDT), 89% of teachers reported their schools were monitoring student online activity.

Yet it is not clear that the software creates safer schools.

Gorosh cited May’s shooting in Uvalde, Texas, that left 21 dead in a school that had invested heavily in monitoring tech.

Some worry the tracking apps could actively cause harm.

The CDT report, for instance, found that while administrators overwhelmingly say the purpose of monitoring software is student safety, “it’s being used far more commonly for disciplinary purposes … and we’re seeing a discrepancy falling along racial lines,” said Elizabeth Laird, director of CDT’s Equity in Civic Technology program.

The programs’ use of artificial intelligence to scan for keywords has also outed LGBT+ students without their consent, she said, noting that 29% of students who identify as LGBT+ said they or someone they knew had experienced this.

And more than a third of teachers said their schools send alerts automatically to law enforcement outside school hours.

“The stated purpose is to keep students safe, and here we have set up a system that is routinizing law enforcement access to this information and finding reasons for them to go into students’ homes,” Laird said.

‘Preyed upon’

A report by federal lawmakers last year into four companies making student monitoring software found that none had made efforts to see if the programs disproportionately targeted marginalized students.

“Students should not be surveilled on the same platforms they use for their schooling,” Senator Ed Markey of Massachusetts, one of the report’s co-authors, told the Thomson Reuters Foundation in a statement.

“As school districts work to incorporate technology in the classroom, we must ensure children and teenagers are not preyed upon by a web of targeted advertising or intrusive monitoring of any kind.”

The Department of Education has committed to releasing guidelines around the use of AI early this year.

A spokesperson said the agency was “committed to protecting the civil rights of all students.”

Aside from the ethical questions around spying on children, many parents are frustrated by the lack of transparency.

“We need more clarity on whether data is being collected, especially sensitive data. You should have at least notification, and probably consent,” said Cassie Creswell, head of Illinois Families for Public Schools, an advocacy group.

Creswell, who has a daughter in a Chicago public school, said several parents have been sent alerts about their children’s online searches, despite not having been asked or told about the monitoring in the first place.

Another child had faced repeated warnings not to play a particular game — even though the student was playing it at home on the family computer, she said.

Creswell and others acknowledge that the issues monitoring aims to address — bullying, depression, violence — are real and need tackling, but question whether technology is the answer.

“If we’re talking about self-harm monitoring, is this the best way to approach the issue?” said Gorosh.

Pointing to evidence suggesting AI is imperfect in capturing the warning signs, she said increased funding for school counselors could be more narrowly tailored to the problem.

“There are huge concerns,” she said. “But maybe technology isn’t the first step to answer some of those issues.”

US, EU Launch Agreement on Artificial Intelligence

The United States and European Union announced Friday an agreement to speed up and enhance the use of artificial intelligence to improve agriculture, health care, emergency response, climate forecasting and the electric grid. 

A senior U.S. administration official, discussing the initiative shortly before the official announcement, called it the first sweeping AI agreement between the United States and Europe. Previously, agreements on the issue had been limited to specific areas such as enhancing privacy, the official said.  

AI modeling, which refers to machine-learning algorithms that use data to make logical decisions, could be used to improve the speed and efficiency of government operations and services.  

“The magic here is in building joint models [while] leaving data where it is,” the senior administration official said. “The U.S. data stays in the U.S. and European data stays there, but we can build a model that talks to the European and the U.S. data, because the more data and the more diverse data, the better the model.” 
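The official did not name a technique, but the description of joint models built while “leaving data where it is” matches federated learning, in which only model parameters cross borders. A minimal sketch under that assumption:

```python
# Federated-averaging sketch (an assumption about the mechanism; the
# officials did not name one). Each side fits a model on its own data
# locally; only weights and sample counts are shared, never raw readings.

def fit_local_model(data):
    # Stand-in for local training: the "model" here is just the mean load.
    return sum(data) / len(data)

us_grid_data = [1.0, 2.0, 3.0]       # stays in the U.S.
eu_grid_data = [2.0, 4.0, 6.0, 8.0]  # stays in Europe

n_us, n_eu = len(us_grid_data), len(eu_grid_data)
joint_model = (
    fit_local_model(us_grid_data) * n_us
    + fit_local_model(eu_grid_data) * n_eu
) / (n_us + n_eu)

print(joint_model)  # sample-weighted average of the two local models
```

The combined model reflects both datasets, consistent with the official’s point that more and more diverse data yields a better model, without either side exporting its raw data.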

The initiative will give governments greater access to more detailed and data-rich AI models, leading to more efficient emergency responses and electric grid management, and other benefits, the administration official said. 

Pointing to the electric grid, the official said the United States collects data on how electricity is being used, where it is generated, and how to balance the grid’s load so that weather changes do not knock it offline. 

Many European countries have similar data points they gather relating to their own grids, the official said. Under the new partnership, all that data would be harnessed into a common AI model that would produce better results for emergency managers, grid operators and others relying on AI to improve systems.  

The partnership is currently between the White House and the European Commission, the executive arm of the 27-member European Union. The senior administration official said other countries would be invited to join in the coming months.  

US Dismantles Ransomware Network Responsible for More Than $100 Million in Extortion

An international ransomware network that extorted more than $100 million from hundreds of victims around the world has been brought down following a monthslong infiltration by the FBI, the Department of Justice announced Thursday.

The group known as Hive targeted more than 1,500 victims, including hospitals, school districts and financial firms in more than 80 countries, the Justice Department said. Officials say the most recent victim in Florida was targeted about two weeks ago.

In a breakthrough, FBI agents armed with a court order infiltrated Hive’s computer networks in July 2022, covertly capturing its decryption keys and offering them to victims, saving the targets $130 million in ransom payments, officials said.

“Cybercrime is a constantly evolving threat. But as I have said before, the Justice Department will spare no resource to identify and bring to justice, anyone, anywhere, who targets the United States with a ransomware attack,” Attorney General Merrick Garland said at a press conference.

Working with German and Dutch law enforcement, the FBI on Wednesday took down the servers that power the Hive network.

“Simply put, using lawful means, we hacked the hackers,” Deputy Attorney General Lisa Monaco said.

While no arrests have been made in connection with the takedown, FBI Director Christopher Wray warned that anybody involved with Hive should be concerned, because this investigation is very much ongoing.

“We’re engaged in what we call ‘joint sequenced operations’ … and that includes going after their infrastructure, going after their crypto and going after the people who work with them,” Wray said.

In a ransomware attack, hackers lock up a victim’s network and then demand payment in exchange for a decryption key.

Hive used a “ransomware-as-a-service” model in which so-called “administrators” develop a malicious software strain and recruit “affiliates” to deploy it against victims.

Officials said Hive affiliates targeted critical U.S. infrastructure entities.

In August 2021, at the height of the COVID-19 pandemic, Hive affiliates attacked a Midwest hospital’s network, preventing the medical facility from accepting any new patients, Garland said.

The hospital was only able to recover its data after paying a ransom.

Hive’s takedown is the latest move in the Biden administration’s crackdown on ransomware attacks, which are on the rise and cost businesses and organizations billions of dollars.

U.S. banks and financial institutions processed nearly $1.2 billion in suspected ransomware payments in 2021, more than double the amount in 2020, the Department of the Treasury’s Financial Crimes Enforcement Network (FinCen) reported in November.

Roughly 75% of the ransomware attacks reported in 2021 had a nexus to Russia, its proxies or persons acting on its behalf, according to FinCen.

The top five highest-grossing ransomware tools used in 2021 were connected to Russian cyber actors, according to FinCen.

Officials would not say whether Hive had any link to Russia.

The Biden administration views ransomware attacks not just as a “pocketbook issue” that affects ordinary Americans but as a growing national security threat that calls for a coordinated response.

Last year, the White House hosted a two-day international ransomware summit where participants from 36 countries agreed to create a fusion cell at the Regional Cyber Defense Center in Lithuania, to be followed by an International Counter Ransomware Task Force later this year.

Trump Reinstated to Facebook After 2-Year Ban

Facebook parent Meta is reinstating former President Donald Trump’s personal account after a two-year suspension following the January 6, 2021, insurrection. 

The company said in a blog post Wednesday it is adding “new guardrails” to ensure there are no “repeat offenders” who violate its rules. 

“In the event that Mr. Trump posts further violating content, the content will be removed and he will be suspended for between one month and two years, depending on the severity of the violation,” said Meta, which is based in Menlo Park, California. 

Trump, in a post on his own social media network, blasted Facebook’s decision to suspend his account as he praised his own site, Truth Social. 

“FACEBOOK, which has lost Billions of Dollars in value since ‘deplatforming’ your favorite President, me, has just announced that they are reinstating my account. Such a thing should never again happen to a sitting President, or anybody else who is not deserving of retribution!” he wrote. 

He was suspended on January 7, a day after the deadly 2021 insurrection. Other social media companies also kicked him off their platforms, though he was recently reinstated on Twitter after Elon Musk took over the company. He has not tweeted. 

Banned from mainstream social media, Trump has been relying on Truth Social, which he launched after being blocked from Twitter. 

Microsoft Reports Outage for Teams, Outlook, Other Services

Microsoft said it’s seeing some improvement to problems with its online services including the Teams messaging platform and Outlook email system after users around the world reported outages Wednesday. 

In a status update, the tech company reported “service degradation” for a number of its Microsoft 365 services. 

Thousands of users reported problems with Teams, Outlook, the Azure cloud computing service and Xbox Live online gaming service early Wednesday on the Downdetector website, which tracks outage reports. Many users also took to social media to complain that services were down. 

By later in the morning, Downdetector showed the number of reports had dropped considerably. 

“We’re continuing to monitor the recovery across the service and some customers are reporting mitigation,” the Microsoft 365 Status Twitter account said. “We’re also connecting the service to additional infrastructure to expedite the recovery process.” 

It tweeted earlier that it had “isolated the problem to a networking configuration issue” and that a network change suspected to be causing the problem was rolled back. 

The outage comes after Microsoft reported Tuesday that its quarterly profit fell 12%, reflecting economic uncertainty that the company said led to its decision this month to cut 10,000 workers. 

ChatGPT Bot Passes US Law School Exam

A chatbot powered by reams of data from the internet has passed exams at a U.S. law school after writing essays on topics ranging from constitutional law to taxation and torts.

ChatGPT from OpenAI, a U.S. company that this week got a massive injection of cash from Microsoft, uses artificial intelligence (AI) to generate streams of text from simple prompts.

The results have been so good that educators have warned it could lead to widespread cheating and even signal the end of traditional classroom teaching methods.

Jonathan Choi, a professor at the University of Minnesota Law School, gave ChatGPT the same test faced by students, consisting of 95 multiple-choice questions and 12 essay questions.

In a white paper titled “ChatGPT goes to law school” published on Monday, he and his coauthors reported that the bot scored a C+ overall.

While this was enough for a pass, the bot was near the bottom of the class in most subjects and “bombed” at multiple-choice questions involving mathematics.

‘Not a great student’

“In writing essays, ChatGPT displayed a strong grasp of basic legal rules and had consistently solid organization and composition,” the authors wrote.

But the bot “often struggled to spot issues when given an open-ended prompt, a core skill on law school exams.”

Officials in New York and other jurisdictions have banned the use of ChatGPT in schools, but Choi suggested it could be a valuable teaching aide.

“Overall, ChatGPT wasn’t a great law student acting alone,” he wrote on Twitter.

“But we expect that collaborating with humans, language models like ChatGPT would be very useful to law students taking exams and to practicing lawyers.”

And playing down the possibility of cheating, he wrote in reply to another Twitter user that two out of three markers had spotted the bot-written paper.

“(They) had a hunch and their hunch was right, because ChatGPT had perfect grammar and was somewhat repetitive,” Choi wrote.

US, 8 States Sue Google on Digital Ad Business Dominance

The U.S. Justice Department filed a lawsuit against Alphabet’s Google on Tuesday over allegations that the company abused its dominance of the digital advertising business, according to a court document.

“Google has used anticompetitive, exclusionary, and unlawful means to eliminate or severely diminish any threat to its dominance over digital advertising technologies,” the government said in its antitrust complaint.

The Justice Department asked the court to compel Google to divest its Google Ad Manager suite, including its ad exchange AdX.

Google did not immediately respond to a request for comment.

The lawsuit is the second federal antitrust complaint filed against Google, alleging violations of antitrust law in how the company acquires or maintains its dominance. The Justice Department lawsuit filed against Google in 2020 focuses on its monopoly in search and is scheduled to go to trial in September.

Eight states joined the department in the lawsuit filed on Tuesday, including Google’s home state of California.

Google shares were down 1.3% on the news.

The lawsuit says “Google has thwarted meaningful competition and deterred innovation in the digital advertising industry, taken supra-competitive profits for itself, prevented the free market from functioning fairly to support the interests of the advertisers and publishers who make today’s powerful internet possible.”

While Google remains the market leader by a long shot, its share of the U.S. digital ad revenue has been eroding, falling to 28.8% last year from 36.7% in 2016, according to Insider Intelligence. Google’s advertising business is responsible for some 80% of its revenue.

AI Tools Can Create New Images, But Who Is the Real Artist?

Countless artists have taken inspiration from “The Starry Night” since Vincent Van Gogh painted the swirling scene in 1889.

Now artificial intelligence systems are doing the same, training themselves on a vast collection of digitized artworks to produce new images you can conjure in seconds from a smartphone app.

The images generated by tools such as DALL-E, Midjourney and Stable Diffusion can be weird and otherworldly but also increasingly realistic and customizable — ask for a “peacock owl in the style of Van Gogh” and they can churn out something that might look similar to what you imagined.

But while Van Gogh and other long-dead master painters aren’t complaining, some living artists and photographers are starting to fight back against the AI software companies creating images derived from their works.

Two new lawsuits — one this week from the Seattle-based photography giant Getty Images — take aim at popular image-generating services for allegedly copying and processing millions of copyright-protected images without a license.

Getty said it has begun legal proceedings in the High Court of Justice in London against Stability AI — the maker of Stable Diffusion — for infringing intellectual property rights to benefit the London-based startup’s commercial interests.

Another lawsuit filed Friday in a U.S. federal court in San Francisco describes AI image-generators as “21st-century collage tools that violate the rights of millions of artists.” The lawsuit, filed by three working artists on behalf of others like them, also names Stability AI as a defendant, along with San Francisco-based image-generator startup Midjourney, and the online gallery DeviantArt.

The lawsuit said AI-generated images “compete in the marketplace with the original images. Until now, when a purchaser seeks a new image ‘in the style’ of a given artist, they must pay to commission or license an original image from that artist.”

Companies that provide image-generating services typically charge users a fee. After a free trial of Midjourney through the chatting app Discord, for instance, users must buy a subscription that starts at $10 per month or up to $600 a year for corporate memberships. The startup OpenAI also charges for use of its DALL-E image generator, and StabilityAI offers a paid service called DreamStudio.

Stability AI said in a statement that “Anyone that believes that this isn’t fair use does not understand the technology and misunderstands the law.”

In a December interview with The Associated Press, before the lawsuits were filed, Midjourney CEO David Holz described his image-making subscription service as “kind of like a search engine” pulling in a wide swath of images from across the internet. He compared copyright concerns about the technology with how such laws have adapted to human creativity.

“Can a person look at somebody else’s picture and learn from it and make a similar picture?” Holz said. “Obviously, it’s allowed for people and if it wasn’t, then it would destroy the whole professional art industry, probably the nonprofessional industry too. To the extent that AIs are learning like people, it’s sort of the same thing and if the images come out differently then it seems like it’s fine.”

The copyright disputes mark the beginning of a backlash against a new generation of impressive tools — some of them introduced just last year — that can generate new images, readable text and computer code on command.

They also raise broader concerns about the propensity of AI tools to amplify misinformation or cause other harm. For AI image generators, that includes the creation of nonconsensual sexual imagery.

Some systems produce photorealistic images that can be impossible to trace, making it difficult to tell the difference between what’s real and what’s AI. And while most have some safeguards in place to block offensive or harmful content, experts say it’s not enough and fear it’s only a matter of time until people utilize these tools to spread disinformation and further erode public trust.

“Once we lose this capability of telling what’s real and what’s fake, everything will suddenly become fake because you lose confidence of anything and everything,” said Wael Abd-Almageed, a professor of electrical and computer engineering at the University of Southern California.

As a test, The Associated Press submitted a text prompt on Stable Diffusion featuring the keywords “Ukraine war” and “Getty Images.” The tool created photo-like images of soldiers in combat with warped faces and hands, pointing and carrying guns. Some of the images also featured the Getty watermark, but with garbled text.

AI can also get details wrong, such as feet, fingers or ears, which can sometimes give away that an image isn’t real, but there’s no set pattern to look out for. And those visual clues can also be edited. On Midjourney, for instance, users often post on the Discord chat asking for advice on how to fix distorted faces and hands.

With some generated images traveling on social networks and potentially going viral, they can be challenging to debunk since they can’t be traced back to a specific tool or data source, according to Chirag Shah, a professor at the Information School at the University of Washington, who uses these tools for research.

“You could make some guesses if you have enough experience working with these tools,” Shah said. “But beyond that, there is no easy or scientific way to really do this.”

But for all the backlash, there are many people who embrace the new AI tools and the creativity they unleash. Searches on Midjourney, for instance, show curious users are using the tool as a hobby to create intricate landscapes, portraits and art.

There’s plenty of room for fear, but “what else can we do with them?” asked the artist Refik Anadol this week at the World Economic Forum in Davos, Switzerland, where he displayed an exhibit of his AI-generated work.

At the Museum of Modern Art in New York, Anadol designed “Unsupervised,” which draws from artworks in the museum’s prestigious collection — including “The Starry Night” — and feeds them into a massive digital installation generating animations of mesmerizing colors and shapes in the museum lobby.

The installation is “constantly changing, evolving and dreaming 138,000 old artworks at MoMA’s Archive,” Anadol said. “From Van Gogh to Picasso to Kandinsky, incredible, inspiring artists who defined and pioneered different techniques exist in this artwork, in this AI dream world.”

Painter Erin Hanson, whose impressionist landscapes are so popular and easy to find online that she has seen their influence in AI-produced visuals, is not worried about her own prolific output, which brings in $3 million a year.

She does, however, worry about the art community as a whole.

“The original artist needs to be acknowledged in some way or compensated,” Hanson said. “That’s what copyright laws are all about. And if artists aren’t acknowledged, then it’s going to make it hard for artists to make a living in the future.”

Google Parent Company To Lay Off 12,000 Workers Globally

Alphabet Inc., the parent company of tech giant Google, announced Friday it is laying off 12,000 workers across the entire company, cuts representing about 6% of its total workforce.

In an email to employees Friday, Chief Executive Officer Sundar Pichai said the company saw dramatic growth over the past two years and hired new employees “for a different economic reality than the one we face today.” He said he takes full responsibility for the decisions that led to where the company is today.

In his email, Pichai said the layoffs come following “a rigorous review across product areas and functions” to ensure the company’s employees and their roles are aligned with Google’s top priorities. “The roles we’re eliminating reflect the outcome of that review,” he said.

Pichai said U.S. employees who are losing their jobs have already been notified, while notifications will take longer for employees in other countries because of different laws and regulations.

Google’s decision comes the same week that other big tech companies, including Meta Platforms Inc., the parent company of Facebook and Instagram, as well as Twitter Inc., Microsoft and Amazon, announced they were laying off thousands of employees.

Some information for this report was provided by The Associated Press and Reuters.

FBI Chief Says He’s ‘Deeply Concerned’ by China’s AI Program

FBI Director Christopher Wray said Thursday that he was “deeply concerned” about the Chinese government’s artificial intelligence program, asserting that it was “not constrained by the rule of law.”

Speaking during a panel session at the World Economic Forum in Davos, Switzerland, Wray said Beijing’s AI ambitions were “built on top of massive troves of intellectual property and sensitive data that they’ve stolen over the years.”

He said that left unchecked, China could use artificial intelligence advancements to further its hacking operations, intellectual property theft and repression of dissidents inside the country and beyond.

“That’s something we’re deeply concerned about. I think everyone here should be deeply concerned about,” he said.

More broadly, he said, “AI is a classic example of a technology where I have the same reaction every time. I think, ‘Wow, we can do that?’ And then I think, ‘Oh God, they can do that.’”

Such concerns have long been voiced by U.S. officials. In October 2021, for instance, U.S. counterintelligence officials issued warnings about China’s ambitions in AI as part of a renewed effort to inform business executives, academics and local and state government officials about the risks of accepting Chinese investment or expertise in key industries.

Earlier that year, an AI commission led by former Google CEO Eric Schmidt urged the U.S. to boost its AI skills to counter China, including by pursuing “AI-enabled” weapons.

A spokesperson for the Chinese Embassy in Washington did not immediately respond to a request seeking comment Thursday about Wray’s comments. Beijing has repeatedly accused Washington of fearmongering and attacked U.S. intelligence for its assessments of China.

Tech Layoffs Mount as Microsoft, Amazon Shed Staff

Software giant Microsoft on Wednesday became the latest major company in the tech sector to announce significant job cuts when it reported it would lay off 10,000 employees, or about 5% of its workforce.

Microsoft’s job cuts come just as e-commerce leader Amazon begins a fresh round of 18,000 layoffs, extending a wave of other major cuts at Twitter, Salesforce and dozens of smaller technology firms in recent weeks.

The phenomenon of job losses in the tech sector has global reach but has been keenly felt in Silicon Valley and other West Coast tech hubs in the United States. The website layoffs.fyi, which tracks job cuts in the tech industry, has identified well over 100 tech firms announcing layoffs since January 1 across North and South America, Europe, Asia and Australia. In all, the website has counted more than 1,200 firms making layoffs since the beginning of 2022.

Changing environment

In an interview at the World Economic Forum in Davos, Switzerland, on Wednesday, Microsoft CEO Satya Nadella appeared to suggest that retrenchment in the tech sector was a result of reduced consumer demand.

“During the pandemic, there was rapid acceleration,” Nadella said. “I think we’re going to go through a phase today where there is some amount of normalization in demand.”

He said the company would seek to drive growth by increasing its own productivity. The interview took place before Microsoft officially announced the layoffs.

One major focus of the layoffs, according to multiple media reports, was the division of the company that makes augmented reality systems, including the company’s HoloLens goggles and the Integrated Visual Augmentation System, which until recently was being developed in cooperation with the U.S. Army.

Later in the day in an email to employees, Nadella wrote, “These are the kinds of hard choices we have made throughout our 47-year history to remain a consequential company in this industry that is unforgiving to anyone who doesn’t adapt to platform shifts.”

However, he signaled the company would continue hiring in areas such as artificial intelligence that management believes are strategically important.

Also on Wednesday, Doug Herrington, head of Amazon’s global retail business, said his company was restructuring to meet consumers’ demands but would continue to invest in areas where it saw the potential for growth, including its grocery delivery business.

Stronger, perhaps

Wayne Hochwarter, who teaches business administration at Florida State University, described the layoffs at Microsoft and Amazon as examples of businesses making adjustments to their workforces in the face of a changing business climate.

“I think they overestimated the trends in personal purchasing patterns, and they thought, ‘OK, we’re going to make sure we’re not shorthanded,’” he told VOA. “And then when things softened a little bit, they realized they had hired too many people.”

He also warned against reading too much into the latest layoffs.

“I don’t think the tech sector is going to heck in a handbasket,” he said. “They may have reevaluated where things are going to go, but I don’t see this as a catalyst for sending us into economic deterioration, or anything that’s going to put a crimp on the economy.”

Looking to the future, Hochwarter said, the workforce changes are “probably going to make them stronger companies.”

Weathering the storm

Margaret O’Mara, author of the book “The Code: Silicon Valley and the Remaking of America,” told VOA that the current run of layoffs in the U.S. was just the latest chapter in a long cycle of booms and busts in the tech sector.

In some important respects, she said, it’s a story about more than just a misreading of trends in consumer preferences.

“It’s similar to other downturns, and there have been many — for every boom there was a bust — in that their macro[economic] conditions have shifted,” she said. “Tech is an industry that’s very much fueled by investment capital and the stock market.”

O’Mara said that over the last 10 years, with low interest rates and large amounts of cash flowing through the economy, conditions have been “extraordinary” for the growth of U.S. tech companies. As those conditions change, so does the amount of money investors want to put into tech firms.

However, O’Mara, a professor of American history at the University of Washington, said it was important not to look at conditions today as similar to the catastrophic dot-com bust of 2000.

“Tech is many orders of magnitude larger than it ever has been before,” she said. “We are talking about platform companies that are unlike the dot-coms, which were very young and very frothy, and it was easy for their value to collapse. They weren’t providing the essential services … fundamental to the rest of the economy.”

By contrast, she said, companies like Microsoft and Amazon have deep connections to the broader U.S. economy and should be able to withstand the current economic headwinds.

Difficult for H-1B visa holders

A disproportionate share of workers in the U.S. technology sector are non-citizens who hold H-1B visas, which allow companies to sponsor them. Layoffs are particularly difficult for visa holders — the overwhelming majority of whom are from India — because once their employment is terminated, they have just 60 days to find a new sponsor. Otherwise, they are required to leave the country.

Hochwarter said he thought companies would pull back on hiring H-1B visa workers, at least for the time being.

“My sense is that because that takes a great deal of effort and energy on the part of the employing organization, they’re probably going to start cutting down on those because they’re just not quite as needed,” he said.

On Wednesday, U.S. Secretary of Labor Martin Walsh, speaking at Davos, bemoaned the state of U.S. immigration law, saying it denies the U.S. the workers it needs to drive economic growth.

“We need immigration reform in America. America has always been a country that has depended on immigration. The threat to the American economy long term is not inflation, it’s immigration,” he said. “It’s not having enough workers.”

Biden Urges Netherlands to Back Restrictions on Exporting Chip Tech to China

President Joe Biden hosted Dutch Prime Minister Mark Rutte on Tuesday at the White House, where he urged the Netherlands to support new U.S. restrictions on exporting chip-making technology to China, a key part of Washington’s strategy in its rivalry against Beijing.

During a brief appearance in front of reporters before their meeting, Biden said that he and Rutte have been working on “how to keep a free and open Indo-Pacific” to “meet the challenges of China.”

“Simply put, our companies, our countries have been so far just lockstep in what we’ve done in our investment to the future. So today, I look forward to discussing how we can further deepen our relationship and securing our supply chains to strengthen our transatlantic partnership,” he said.

ASML Holding NV, maker of the world’s most advanced semiconductor lithography systems, is headquartered in Veldhoven, making the Netherlands key to Washington’s chip push against Beijing. Ahead of Rutte’s visit, Dutch Trade Minister Liesje Schreinemacher said the Netherlands is consulting with European and Asian allies and will not automatically accept the new restrictions that the U.S. Commerce Department launched in October.

“You can’t say that they’ve been pressuring us for two years and now we have to sign on the dotted line. And we won’t,” she said.

Rutte did not mention the semiconductor issue ahead of his meeting with Biden, focusing instead on Russia’s invasion of Ukraine, where the NATO allies have been working together to support Kyiv.

“Let’s stay closely together this year,” Rutte said. “And hopefully, things will move forward in a way which is acceptable for Ukraine.”

China is one of ASML’s biggest clients. CEO Peter Wennink in October played down the impact of the U.S. export control regulations.

“Based on our initial assessment, the new restrictions do not amend the rules governing lithography equipment shipped by ASML out of the Netherlands and we expect the direct impact on ASML’s overall 2023 shipment plan to be limited,” he said.

Shoring up allies

Biden has been shoring up allies, including the Netherlands, Japan and South Korea — home to leading companies that play a critical role in the industry’s supply chain — to limit Beijing’s access to advanced semiconductors. Last week he hosted Japanese Prime Minister Fumio Kishida, who said he backs Biden’s attempt but did not agree to match the sweeping curbs targeting China’s semiconductor and supercomputing industries.

U.S. officials say export restrictions on chips are necessary because China can use semiconductors to advance its military systems, including weapons of mass destruction, and to commit human rights abuses.

The October restrictions follow the U.S. Congress’ passage in July of the CHIPS Act of 2022, which aims to strengthen domestic semiconductor manufacturing, design and research and to reinforce America’s chip supply chains. The legislation also restricts companies that receive U.S. subsidies from investing in and expanding cutting-edge chipmaking facilities in China.

Some information for this story came from AP.

Israel’s Cognyte Won Tender to Sell Spyware to Myanmar Before Coup, Documents Show

Israel’s Cognyte Software Ltd won a tender to sell intercept spyware to a Myanmar state-backed telecommunications firm a month before the Asian nation’s February 2021 military coup, according to documents reviewed by Reuters.

The deal was made even though Israel has claimed it stopped defense technology transfers to Myanmar following a 2017 ruling by Israel’s Supreme Court, according to a legal complaint recently filed with Israel’s attorney general and disclosed Sunday.

While the ruling was placed under a rare gag order at the request of the state, meaning media cannot cite the verdict, Israel’s government has stated publicly on numerous occasions that defense exports to Myanmar are banned.

The complaint, led by high-profile Israeli human rights lawyer Eitay Mack who spearheaded the campaign for the Supreme Court ruling, calls for a criminal investigation into the deal.

It accuses Cognyte and unnamed defense and foreign ministry officials who supervise such deals of “aiding and abetting crimes against humanity in Myanmar.”

The complaint was filed on behalf of more than 60 Israelis, including a former speaker of the house as well as prominent activists, academics and writers.

The documents about the deal, provided to Reuters and Mack by activist group Justice for Myanmar, are a January 2021 letter with attachments from Myanmar Posts and Telecommunications (MPT) to local regulators that list Cognyte as the winning vendor for intercept technology and note the purchase order was issued “by 30th Dec 2020.”

Intercept spyware can give authorities the power to listen in on calls, view text messages and web traffic including emails, and track the locations of users without the assistance of telecom and internet firms.

Representatives for Cognyte, Myanmar’s military government and MPT did not respond to multiple Reuters requests for comment. Japan’s KDDI Corp and Sumitomo Corp, which have stakes in MPT, declined to comment, saying they were not privy to details on communication interception.

Israel’s attorney general did not respond to requests for comment about the complaint. The foreign affairs ministry did not respond to requests for comment about the deal, while the defense ministry declined to comment.

Two people with knowledge of Myanmar’s intercept plans separately told Reuters the Cognyte system was tested by MPT.

They declined to be identified for fear of retribution by Myanmar’s junta.

A source with direct knowledge of the matter and three people briefed on the issue told Reuters that MPT uses intercept spyware, although they did not identify the vendor. Reuters was unable to determine whether the sale of Cognyte intercept technology to MPT was finalized.

Even before the coup, public concern had mounted in Israel about the country’s defense exports to Myanmar after a brutal 2017 crackdown by the military on the country’s Rohingya population while Aung San Suu Kyi’s government was in power. The crackdown prompted the petition led by Mack that asked the Supreme Court to ban arms exports to Myanmar.

Since the coup, the junta has killed thousands of people including many political opponents, according to the United Nations.

Cognyte under fire

Many governments around the world allow what are commonly called “lawful intercepts” to be used by law enforcement agencies to catch criminals, but the technology is not ordinarily employed without some kind of legal process, cybersecurity experts have said.

According to industry executives and activists previously interviewed by Reuters, Myanmar’s junta is using invasive telecoms spyware without legal safeguards to protect human rights.

Mack said Cognyte’s participation in the tender contradicts statements made by Israeli officials after the Supreme Court ruling that no security exports had been made to Myanmar.

While intercept spyware is typically described as “dual-use” technology for civilian and defense purposes, Israeli law states that “dual-use” technology is classified as defense equipment.

Israeli law also requires companies exporting defense-related products to seek licenses for export and marketing when doing deals. The legal complaint said any officials who granted Cognyte licenses for Myanmar deals should be investigated. Reuters was unable to determine whether Cognyte obtained such licenses.

Around the time of the 2020 deal, the political situation in Myanmar was tense with the military disputing the results of an election won by Suu Kyi.

Norway’s Telenor, one of the biggest telecoms firms in Myanmar before it withdrew from the country last year, also said in a Dec. 3, 2020, briefing and statement that it was concerned about Myanmar authorities’ plans for lawful intercepts due to insufficient legal safeguards.

Nasdaq-listed Cognyte was spun off in February 2021 from Verint Systems Inc, a pioneering giant in Israel’s cybersecurity industry.

Cognyte, which had $474 million in annual revenue for its last financial year, was also banned from Facebook in 2021.

Facebook owner Meta Platforms Inc said in a report Cognyte “enables managing fake accounts across social media platforms.”

Meta said its investigation identified Cognyte customers in a range of countries such as Kenya, Mexico and Indonesia and their targets included journalists and politicians. It did not identify the customers or the targets.

Meta did not respond to a request for further comment.

Norway’s sovereign wealth fund last month dropped Cognyte from its portfolio, saying that states reported to be customers of its surveillance products and services “have been accused of extremely serious human rights violations.” The fund did not name any states.

Cognyte has not responded publicly to the claims made by Meta or Norway’s sovereign wealth fund.

Fight Over Big Tech Looms in US Supreme Court

An upcoming U.S. Supreme Court case that asks whether tech firms can be held liable for damages related to algorithmically generated content recommendations has the ability to “upend the internet,” according to a brief filed by Google this week.

The case, Gonzalez v. Google LLC, is a long-awaited opportunity for the high court to weigh in on interpretations of Section 230 of the Communications Decency Act of 1996. A provision of federal law that has come under fire from across the political spectrum, Section 230 shields technology firms from liability for content published by third parties on their platforms, but also allows those same firms to curate or bar certain content.

The case arises from a complaint by Reynaldo Gonzalez, whose daughter was killed in an attack by members of the terror group ISIS in Paris in 2015. Gonzalez argues that Google helped ISIS recruit members because YouTube, the online video hosting service owned by Google, used a video recommendation algorithm that suggested videos published by ISIS to individuals who displayed interest in the group.

Gonzalez’s complaint argues that by recommending content, YouTube went beyond simply providing a platform for ISIS videos, and should therefore be held accountable for their effects.

Dystopia warning

The case has garnered the attention of a multitude of interested parties, including free speech advocates who want tech firms’ liability shield left largely intact. Others argue that because tech firms take affirmative steps to keep certain content off their platforms, their claims to be simple conduits of information ring hollow, and that they should therefore be liable for the material they publish.

In its brief, Google painted a dire picture of what might happen if the latter interpretation were to prevail, arguing that it “would turn the internet into a dystopia where providers would face legal pressure to censor any objectionable content. Some might comply; others might seek to evade liability by shutting their eyes and leaving up everything, no matter how objectionable.”

Not everyone shares Google’s concern.

“Actually all it would do is make it so that Google and other tech companies have to follow the law just like everybody else,” Megan Iorio, senior counsel for the Electronic Privacy Information Center, told VOA.

“Things are not so great on the internet for certain groups of people right now because of Section 230,” said Iorio, whose organization filed a friend of the court brief in the case. “Section 230 makes it so that tech companies don’t have to respond when somebody tells them that non-consensual pornography has been posted on their site and keeps on proliferating. They don’t have to take down other things that a court has found violate the person’s privacy rights. So you know, to [say] that returning Section 230 to its original understanding is going to create a hellscape is hyperbolic.”

Unpredictable effects

Experts said the Supreme Court might try to chart a narrow course that leaves some protections intact for tech firms but allows liability for recommendations. However, because algorithmic recommendations are the only practical way to organize the dizzying array of content available online and are ubiquitous across the internet, any ruling that affects them could have a significant impact.

“It has pretty profound implications, because with tech regulation and tech law, things can have unintended consequences,” John Villasenor, a professor of engineering and law and director of the UCLA Institute for Technology, Law and Policy, told VOA.

“The challenge is that even a narrow ruling, for example, holding that targeted recommendations are not protected, would have all sorts of very complicated downstream consequences,” Villasenor said. “If it’s the case that targeted recommendations aren’t protected under the liability shield, then is it also true that search results that are in some sense customized to a particular user are also unprotected?”

26 words

The key language in Section 230 has been called “the 26 words that created the internet.” The provision reads:

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

At the time the law was drafted in the 1990s, people around the world were flocking to an internet that was still in its infancy. It was an open question whether an internet platform that gave individual third parties the ability to post content, such as a bulletin board service, was legally liable for that content.

Recognizing that a patchwork of state-level libel and defamation laws could leave developing internet companies exposed to crippling lawsuits, Congress drafted language meant to shield them. That protection is credited by many for the fact that U.S. tech firms, particularly in Silicon Valley, rose to dominance on the internet in the 21st century.

Because of the global reach of U.S. technology firms, the ruling in Gonzalez v. Google LLC is likely to echo far beyond the United States when it is handed down.

Legal groundwork

The groundwork for the Supreme Court’s decision to take the case was laid in 2020, when Justice Clarence Thomas wrote in response to an appeal that, “in an appropriate case, we should consider whether the text of this increasingly important statute aligns with the current state of immunity enjoyed by internet platforms.”

That statement by Thomas, arguably the court’s most conservative member, heartened many on the right who are concerned that “Big Tech” firms enjoy too much cultural power in the U.S., including the ability to deny a platform to individuals with whose views they disagree.

Gonzalez v. Google LLC is remarkable in that many cases that make it to the Supreme Court do so in part because lower courts have issued conflicting decisions, requiring an authoritative ruling from the high court to provide legal clarity.

Gonzalez’s case, however, has been dismissed by two lower courts, both of which held that Section 230 rendered Google immune from the suit.

Conservative concerns

Politicians have been calling for reform of Section 230 for years, with both Republicans and Democrats joining the chorus, though frequently for different reasons.

Former President Donald Trump regularly railed against large technology firms, threatening to use the federal government to rein them in, especially when he believed that they were preventing him or his supporters from getting their messages out to the public.

His concern became particularly intense during the early years of the COVID-19 pandemic, when technology firms began working to limit the spread of social media accounts that featured misinformation about the virus and the safety of vaccinations.

Trump was eventually kicked off Twitter and Facebook after using those platforms to spread false claims about the 2020 presidential election, which he lost, and to help organize a rally that preceded the assault on the U.S. Capitol on January 6, 2021.

Major figures in the Republican Party are active in the Gonzalez case. Missouri Senator Josh Hawley and Texas Senator Ted Cruz have both submitted briefs in the case urging the court to crack down on Google and large tech firms in general.

“Confident in their ability to dodge liability, platforms have not been shy about restricting access and removing content based on the politics of the speaker, an issue that has persistently arisen as Big Tech companies censor and remove content espousing conservative political views,” Cruz writes.

Biden calls for reform

Section 230 criticism has come from both sides of the aisle. On Wednesday, President Joe Biden published an essay in The Wall Street Journal urging “Democrats and Republicans to come together to pass strong bipartisan legislation to hold Big Tech accountable.”

Biden argues for a number of reforms, including improved privacy protections for individuals, especially children, and more robust competition, but he leaves little doubt about what he sees as a need for Section 230 reform.

“[W]e need Big Tech companies to take responsibility for the content they spread and the algorithms they use,” he writes. “That’s why I’ve long said we must fundamentally reform Section 230 of the Communications Decency Act, which protects tech companies from legal responsibility for content posted on their sites.”

Report: Iran May Be Using Facial Recognition Technology to Police Hijab Law

A recently published report in a U.S.-based magazine says Iran is likely using facial recognition technology to monitor women’s compliance with the country’s hijab law.

While there are other ways people can be identified, Wired magazine says Iran’s apparent use of facial recognition technology against women is “perhaps the first known instance of a government using face recognition to impose dress law on women based on religious belief.”

Iran announced late last year that it would begin using facial recognition technology to monitor women.

Wired said that since protests erupted across Iran following the death of a young woman who was arrested for wearing her headscarf improperly, Iranian women have reported being arrested for hijab infractions a day or two after attending protests, even though they had no interaction with police during the demonstrations.

Tiandy, a Chinese company blacklisted by the U.S., is a likely provider of facial recognition technology to Iran, although neither it nor Iranian officials responded to a request for comment from Wired.

The company has in the past listed Iran’s Revolutionary Guard Corps and other Iranian police and government agencies as customers. Tiandy has also boasted on its website that its technology helped China identify the country’s ethnic minorities, including Uyghurs.

Journalists Say Elon Musk Needs to Reinstitute Monitoring of Twitter

Concerns linger over Twitter’s stance on free expression and safety since Elon Musk took over the platform in a $44 billion deal.

Since taking ownership in late October, Musk has instituted changes including dissolving an oversight review channel, laying off a large portion of the team focused on combating misinformation, and suspending the accounts of several U.S. journalists.

Two media advocacy groups on Wednesday called on Musk to reverse course and implement policies to protect the right to legitimate information and press freedom.

In a joint letter to Twitter, Reporters Without Borders (RSF) and the Committee to Protect Journalists (CPJ) voiced “alarm” that Musk had undermined the legitimacy of Twitter by dissolving the site’s oversight review panel that checked postings for their truthfulness and laying off the majority of Twitter staff who helped combat misinformation.

The journalists’ groups also criticized Musk for “arbitrarily reinstating the accounts of nefarious actors, including known spreaders of misinformation,” and for the platform’s suspension of several reporters, including VOA’s chief national correspondent, Steve Herman.

“Twitter’s policies should be crafted and communicated in a transparent manner … not arbitrarily or based on the company leadership’s personal preferences, perceptions and frustrations,” said the two organizations.

The groups also said Musk should reinstate Twitter’s Trust and Safety Council to review content posted on the site and better monitor attempts to censor information and penalize some individuals, including many journalists.

“Transparency and democratic safeguards must replace Musk’s capricious, arbitrary decision-making,” said Christophe Deloire, secretary-general of RSF.

In December, Twitter notified members of the Trust and Safety Council that the advisory group had been dissolved.

The email to the group said Twitter would work with partners through smaller meetings and regional contacts, said CPJ, a media rights organization that was a member of the council along with RSF.

“Mechanisms such as the Trust and Safety Council help platforms like Twitter to understand how to address harm and counter behavior that targets journalists,” CPJ President Jodie Ginsberg said in a statement. “Safety online can mean survival offline.”

Twitter also has continued its suspension of some journalists, saying it will restore their accounts only if certain posts are deleted.

Those suspended had tweeted about @ElonJet, an account that uses publicly available data to report on Musk’s private jet. That account was also suspended.

Musk had said on Twitter that the @ElonJet account and any accounts that linked to it were suspended because they violated Twitter’s anti-doxxing policy.

Doxxing is maliciously publishing a person’s private or identifying information — such as a phone number or address — on the internet.

The @ElonJet Twitter account, however, used publicly available data. Additionally, none of the journalists who had tweeted about Musk and his shutdown of the account had tweeted location information for his plane. They did report that the @ElonJet account had moved to another platform and named the platform.

Some of the journalists have had their accounts restored after removing content. But VOA’s Herman is still suspended from the platform after refusing to remove tweets.

The veteran correspondent said he was notified this week that his appeal against the permanent suspension was denied. The reason: violating rules against “posting private information.”

Before the account was suspended, Herman had more than 111,000 followers.

“Based on what Musk has previously tweeted and recent media reports, I have concerns that if I don’t give in to the demand to delete several posts and reactivate @W7VOA, my Twitter account will eventually be deleted for inactivity or auctioned off,” he told VOA.

Herman, like other journalists, migrated to other social media platforms including Mastodon, where he gained 40,000 followers. But, he said, “Neither platform has yet to achieve critical mass and thus the influence of Twitter, especially for journalists and policymakers.”