In this week’s edition, Russia blames Ukraine for a drone incident at the Kremlin, big tech CEOs meet with Vice President Kamala Harris, Samsung bans employees from using ChatGPT, and climate misinformation is still monetized on YouTube.

Industrial Policy & International Security

Russian Government Says Kremlin Hit by Ukraine Drones, Blames Kyiv | The Wall Street Journal

On Wednesday, two drones crashed into the Kremlin, and Russian government officials blamed Ukraine for the incident. Ukrainian officials have denied the allegations, and Mykhailo Podolyak, an advisor to Ukrainian President Volodymyr Zelensky, suggested that domestic opponents of Russian President Vladimir Putin may have been responsible for the attack. White House officials denied any US involvement and emphasized that the US does not support Ukrainian attacks beyond Ukraine's borders. White House Press Secretary Karine Jean-Pierre stopped short of accusing Russia of staging the attack to build political support for its offensive against Ukraine, but she noted that Russia has a history of carrying out false flag operations. The incident comes days before Victory Day celebrations scheduled in Red Square on May 9, and Kremlin officials said they reserve the right to take retaliatory measures.

China Still a ‘Huge Market’ for US Chip Companies Despite Risks | Bloomberg

The Semiconductor Industry Association, a major US semiconductor lobbying group, is pushing the Biden administration for clear rules that preserve American semiconductor companies' access to the Chinese market despite national security concerns. The group wants any proposed rules to be well-defined, transparent, and predictable in order to reduce uncertainty for companies in the industry. The US government is expected to propose rules specifying which kinds of investments in China would preclude chip companies from receiving funding under the CHIPS and Science Act.

US Regulation

Google, Microsoft CEOs called to AI meeting at White House | Reuters

CEOs from Google, Microsoft, OpenAI, and Anthropic are scheduled to meet with Vice President Kamala Harris and other top officials to discuss concerns around AI technology. President Joe Biden has called on companies to ensure that their products are safe before releasing them to the public, and the Biden administration has sought public comment on accountability measures for AI systems amid growing concerns about the technology's impact on national security, education, and workers. The meeting will also be attended by top officials including Biden's Chief of Staff, National Security Adviser, and Secretary of Commerce. Notably, Elon Musk has called for greater government oversight of AI, which he believes is “a danger to the public.”

Meta faces new restrictions over FTC allegations Facebook violated kids’ privacy rules | The Hill

The Federal Trade Commission (FTC) has proposed new restrictions on Meta over concerns about the company's data privacy practices and the safety of minors. The FTC alleges that Meta violated the Children’s Online Privacy Protection Act by misrepresenting that minors using Meta’s Messenger Kids app would only be able to communicate with contacts approved by their parents; according to the agency, children were able to communicate with unapproved contacts in group text chats and group video calls. Meta representatives called the allegations a “political stunt” and said the company had publicly disclosed and resolved the incidents three years earlier. The alleged violations were discovered during a review of Meta’s compliance with a 2020 FTC order, which required the social media company to pay $5 billion and expand its privacy program. The agency's update to that order would add new restrictions on collecting data from minors, prohibit Meta from profiting from the data of children and teens under eighteen, and require users' affirmative consent for any future uses of facial recognition technology.

Innovation

Brain scans can translate a person’s thoughts into words | MIT Technology Review

Researchers from the University of Texas at Austin have developed a non-invasive brain-computer interface that may one day be capable of converting a person's thoughts into words. The technology could help people who have lost the ability to speak due to conditions such as stroke or ALS. The decoder was trained on functional magnetic resonance imaging (fMRI) scans of three volunteers as they listened to sixteen hours of podcast content, and it could eventually predict whole sentences a participant was hearing with surprising accuracy. The team used GPT-1, a large language model developed by OpenAI, trained on a dataset of English sentences sourced from Reddit, podcasts, and The Moth Radio Hour. However, the algorithm performed only “barely above chance” when decoding brain scans of other people; for now, the decoder works effectively only for participants willing to help train it.

Experts, meanwhile, are beginning to raise ethical questions about mental privacy, arguing that nobody's brain should be decoded without their cooperation.
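For readers curious how such a decoder can work at all, the sketch below illustrates the general recipe the article describes, not the researchers' actual pipeline: a language model proposes candidate sentences, a per-participant “encoding model” (here, plain ridge regression fit on synthetic data) predicts the fMRI response each candidate should evoke, and the candidate whose predicted response best matches the observed scan wins. Every name, dimension, and dataset here is a placeholder.

```python
# Minimal, hypothetical sketch of language decoding from fMRI. Real systems
# fit the encoding model to many hours of one listener's scans; here we
# fabricate data so the example runs end to end.
import zlib
import numpy as np

rng = np.random.default_rng(0)

N_VOXELS = 500     # number of measured brain locations (placeholder scale)
N_FEATURES = 64    # dimensionality of the text embedding (placeholder)
BEAM_WIDTH = 3

def embed(text: str) -> np.ndarray:
    """Placeholder for a language-model embedding of a candidate sentence."""
    seed = zlib.crc32(text.encode())  # deterministic stand-in features
    return np.random.default_rng(seed).standard_normal(N_FEATURES)

# --- "Training": fit an encoding model mapping text features -> brain response.
# In the study this used 16 hours of podcast listening; here, synthetic data.
true_W = rng.standard_normal((N_FEATURES, N_VOXELS))  # unknown ground truth
train_texts = [f"training sentence {i}" for i in range(200)]
X = np.stack([embed(t) for t in train_texts])
Y = X @ true_W + 0.1 * rng.standard_normal((200, N_VOXELS))  # noisy scans

# Ridge regression: W = (X^T X + lam*I)^(-1) X^T Y
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(N_FEATURES), X.T @ Y)

def score(candidate: str, observed_scan: np.ndarray) -> float:
    """Higher when the candidate's predicted brain response matches the scan."""
    predicted = embed(candidate) @ W
    return -float(np.sum((predicted - observed_scan) ** 2))

# --- "Decoding": rank candidate sentences a language model might propose.
observed_scan = embed("the story continued quietly") @ true_W  # stand-in scan
proposals = ["the story continued quietly",
             "a dog barked loudly",
             "rain fell on the roof"]
beam = sorted(proposals, key=lambda c: score(c, observed_scan), reverse=True)
print("best candidate:", beam[0])  # keeps the top BEAM_WIDTH in a real search
```

Because the encoding model is fit to one listener's scans, it transfers poorly to anyone else's brain, which is consistent with the researchers' finding that decoding other people's scans performed only barely above chance.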

Cyber

Samsung Bans Staff’s AI Use After Spotting ChatGPT Data Leak | Bloomberg

Samsung has banned the use of generative AI tools, including ChatGPT, on company-owned computers, tablets, phones, and internal networks, citing the security risks of transmitted data being stored on external servers. The move comes after the company discovered that employees had uploaded sensitive code to such platforms, including Google Bard and Bing. The company warned employees that violating the new policy may result in disciplinary action, up to and including termination of employment. In an internal survey, 65 percent of respondents expressed concern about the security risks posed by employee use of AI tools. As a longer-term solution, Samsung is developing its own internal AI tools for translation and document summarization and is exploring ways to prevent sensitive information from being uploaded to external services. OpenAI is also moving to address such concerns: last month it added an “incognito mode” to ChatGPT that lets users block their data from being used for AI model training.

State & Local Tech Ecosystems

Fire Sale: $300 Million San Francisco Office Tower, Mostly Empty. Open to Offers. | The Wall Street Journal

A San Francisco office building on California Street is up for sale at $60 million, an 80 percent decrease from its 2019 valuation of $300 million. The building's largest tenant, Union Bank, has mostly vacated, leaving a 75 percent vacancy rate. San Francisco's commercial real estate market has suffered from the city's high cost of living, quality-of-life problems such as crime and homelessness, and the tech industry's shift toward remote and hybrid work. Even large tech companies like Salesforce and Meta Platforms are struggling to fill their office space and are turning to subletting, and with fewer workers downtown, many restaurants and small businesses have laid off employees or closed permanently. Although roughly 30 percent of San Francisco's office space is vacant, investors in artificial intelligence continue to show interest in office space. The downturn is also cutting into the city's property tax revenue: San Francisco faces a budget shortfall of $780 million, and Mayor London Breed has asked department heads to prepare for budget cuts of up to 13 percent over the next two years.

Democracy Online

Google still profits from climate lies on YouTube | The Verge

Google has violated its own policy by running ads on YouTube videos that promote climate change disinformation, according to the Climate Action Against Disinformation (CAAD) coalition. Over a year ago, Google pledged not to monetize content that contradicts mainstream climate science, yet CAAD identified one hundred such videos, along with another one hundred promoting ineffective methods of counteracting or solving climate change. CAAD argues that this kind of content could delay legitimate climate action and advocates a wider definition of disinformation to limit its spread. In response, Google reviewed the videos in the CAAD dataset and removed ads from content that violated company policies. Google policy communications manager Michael Aciman acknowledged that “our enforcement is not always perfect, and we are constantly working to improve our systems.”
