MIT Technology Review Fri, 12 Jan 2024 08:50:36 +0000 en-US hourly 1,33px,1272px,716px&w=32px MIT Technology Review 32 32 The innovation that gets an Alzheimer’s drug through the blood-brain barrier Fri, 12 Jan 2024 09:00:00 +0000 This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

Therapies to treat brain diseases share a common problem: they struggle to reach their target. The blood vessels that permeate the brain have a special lining so tightly packed with cells that only very tiny molecules can pass through. This blood-brain barrier “acts as a seal,” protecting the brain from toxins or other harmful substances, says Anne Eichmann, a molecular biologist at Yale. But it also keeps most medicines out. Researchers have been working on methods to sneak drugs past the blood-brain barrier for decades. And their hard work is finally beginning to pay off.

Last week, researchers at the West Virginia University Rockefeller Neuroscience Institute reported that by using focused ultrasound to open the blood-brain barrier, they

improved delivery of a new Alzheimer’s treatment and sped up clearance of the sticky plaques that are thought to contribute to some of the cognitive and memory problems in people with Alzheimer’s by 32%.

For this issue of The Checkup, we’ll explore some of the ways scientists are trying to disrupt the blood-brain barrier.

A patient surrounded by a medical team lays on the bed of an MRI machine with their head in a special focused ultrasound helmet
An Alzheimer’s patient undergoes focused ultrasound treatment with the WVU RNI team.

In the West Virginia study, three people with mild Alzheimer’s received monthly doses of aducanumab, a lab-made antibody that is delivered via IV. This drug, first approved in 2021,  helps clear away beta-amyloid, a protein fragment that clumps up in the brains of people with Alzheimer’s disease. (The drug’s approval was controversial, and it’s still not clear whether it actually slows progression of the disease.)  After the infusion, the researchers treated specific regions of the patients’ brains with focused ultrasound, but just on one side. That allowed them to use the other half of the brain as a control. PET scans revealed a greater reduction in amyloid plaques in the ultrasound-treated regions than in those same regions on the untreated side of the brain, suggesting that more of the antibody was getting into the brain on the treated side.

Aducanumab does clear plaques without ultrasound, but it takes a long time, perhaps in part because the antibody has trouble entering the brain. “Instead of using the therapy intravenously for 18 to 24 months to see the plaque reduction, we want to see if we can achieve that reduction in a few months,” says Ali Rezai, a neurosurgeon at West Virginia University Rockefeller Neuroscience Institute and an author of the new study. Cutting the amount of time needed to clear these plaques might help slow the memory loss and cognitive problems that define the disease.

The device used to target and deliver the ultrasound waves, developed by a company called Insightec, consists of an MRI machine and a helmet studded with ultrasound transducers. It’s FDA approved, but for an entirely different purpose: to help stop tremors in people with Parkinson’s by creating lesions in the brain. To open the blood-brain barrier, “we inject individuals intravenously with microbubbles,” Rezai says. These tiny gas bubbles, commonly used as a contrast agent, travel through the bloodstream. Using the MRI, the researchers can aim the ultrasound waves at very specific parts of the brain “with millimeter precision,” Rezai says. When the waves hit the microbubbles, the bubbles begin to expand and contract, physically pushing apart the tightly packed cells that line the brain’s capillaries. “This temporary opening can last up to 48 hours, which means that during those 48 hours, you can have increased penetration into the brain of therapeutics,” he says.

Focused ultrasound has been explored as a method for opening the blood-brain barrier for years. (We wrote about this technology way back in 2006.) But this is the first time it has been combined with an Alzheimer’s therapy and tested in humans.

The proof-of-concept study was too small to look at efficacy, but Rezai and his team are planning to continue their work. The next step is to repeat the study in five people with one of the newer anti-amyloid antibodies, lecanemab. Not only does that drug clear plaque, but one study showed that it slowed disease progression by about 30% after 18 months of treatment in patients with early Alzheimer’s symptoms. That’s a modest amount, but a major success in a field that has struggled with repeated failures. 

Eichmann, who is also working on disrupting the blood-brain barrier, says the new results using focused ultrasound are exciting. But she wonders about long-term effects of the technique. “I guess it remains to be seen whether over time, upon repeated use, this would be damaging to the blood-brain barrier,” she says.

Other strategies for opening the blood-brain barrier look promising too. Rather than mechanically pushing the barrier apart, Roche, a pharmaceutical company, has developed a technology called “Brainshuttle” that ferries drugs across it by binding to receptors on the cells that line the vessel walls.

The company has linked Brainshuttle to its own anti-amyloid antibody, gantenerumab, and is testing it in 44 people with Alzheimer’s. At a conference in October, researchers presented initial results. The highest dose completely wiped out plaque in three of four participants. The biotech company Denali Therapeutics is working on a similar strategy to tackle Parkinson’s and other neurodegenerative diseases..   

Eichmann is working on a different strategy. Her team is testing an antibody that binds to a receptor that is important for maintaining the integrity of the blood-brain barrier. By blocking that receptor, they can temporarily loosen the junctions between cells, at least in lab mice.

Other groups are targeting different receptors, exploring various viral vectors, or developing nanoparticles that can slip into the brain. 

All these strategies will have different advantages and drawbacks, and it isn’t yet clear which will be safest and most effective. But Eichmann thinks some strategy is likely to be approved in the coming years: “We are indeed getting close.”

Techniques to open the blood-brain barrier could be useful in a whole host of diseases—Alzheimer’s, but also Parkinson’s disease, ALS, and brain tumors. “This really opens up a whole array of potential opportunities,” Rezai says. “It’s an exciting time.”

Read more from MIT Technology Review’s archive

Until recently, drug development in Alzheimer’s had been a dismal pursuit, marked by repeated failures. In 2017, Emily Mullin looked at how failures of some of the anti-amyloid drugs had researchers questioning whether amyloid is really the problem in Alzheimer’s. 

In 2016, Ryan Cross covered one of the first efforts to use ultrasound to open the blood-brain barrier in humans, a trial to deliver chemotherapy to patients with recurrent brain tumors. That same year, Antonio Regalado reported some of the first exciting results of the Alzheimer’s drug aducanumab. 

From around the web

Bayer’s non-hormonal drug to treat hot flashes reduced their frequency and intensity and improved sleep and quality of life. These results, coupled with other recent advances in treatment for symptoms of menopause, are a sign that these long-neglected issues have become big business. (Stat)

Covid is surging. Wastewater data is the best way we have to measure the virus’s ebb and flow, but it’s far from perfect. (NYT)

Last week the FDA approved Florida’s request to import drugs from Canada to cut costs. The pharmaceutical industry is not thrilled. (Reuters) Neither is Canada. (Ars Technica

The Download: enhanced geothermal systems, and promising climate tech Thu, 11 Jan 2024 13:10:00 +0000 This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Enhanced geothermal systems: 10 Breakthrough Technologies 2024

Geothermal heat, an abundant and carbon-­free energy source, offers an alternative to fossil fuels that doesn’t vary with the weather or time of day. However, conventional geothermal plants require specific geological conditions—in particular, permeable rocks with water sources.

Because of this, geothermal accounts for less than 1% of global renewables capacity. But an emerging technology could let us exploit even more of the heat beneath our feet.

Enhanced geothermal systems allow companies to access geothermal heat in new locations, cracking open relatively solid rocks at depths much greater than existing geothermal wells. Water is then injected into these rocks to generate steam, which subsequently drives turbines to produce electricity. But the technology is not without potential risks. Read the full story.

—June Kim

Enhanced geothermal systems is one of MIT Technology Review’s 10 Breakthrough Technologies for 2024. Check out the rest of the list and vote for the final 11th breakthrough—we’ll reveal the winner in April.

Three climate technologies breaking through in 2024

Climate tech is big news for 2024. So much so, it makes up not one, not two, but three entries in this year’s 10 Breakthrough Technologies, MIT Technology Review’s list of the tech that’s changing our world.

Our climate reporter Casey Crownhart has written a handy guide to why these technologies are so noteworthy, and what you need to know about them. Read the full story.

This story is from The Spark, our weekly climate and energy newsletter. Sign up to receive it in your inbox every Wednesday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 US regulators have approved a new bitcoin-tracking product after all
Just hours after saying a tweet confirming the news was the result of a hack. (NYT $)
+ The approval of bitcoin exchange traded funds is a major boon for crypto. (Reuters)
+ A goldrush for investors could be on the horizon. (Wired $)
+ Funding for crypto startups is still in the doldrums, though. (Bloomberg $)

2 OpenAI’s GPT Store is live
Visitors can both buy and sell customized chatbots through it. (The Guardian)
+ The bots can be as simple or complex as their creators desire. (TechCrunch)
+ What’s next for OpenAI. (MIT Technology Review)

3 America’s EV charging network just got a major boost
To the tune of $623 million to build 7,5000 new charge points. (Wired $)
+ Why getting more EVs on the road is all about charging. (MIT Technology Review)

4 How Mark Zuckerberg fell out of love with the metaverse
And pivoted fully to AI. (Bloomberg $) 

5 We don’t have proper tools to detect plagiarism
AI checkers are trained to look for matching text—not plagiarism itself. (The Markup)
+ AI-text detection tools are really easy to fool. (MIT Technology Review)

6 The US is on the verge of a dangerous vaccination tipping point
Misinformation online is the driving force behind falling vaccination rates for numerous conditions, regulators warn. (Ars Technica)

7 Meta’s content moderators are fighting for their rights
After being forced to watch online atrocities, they were sacked without warning. (FT $)
+ How an undercover content moderator polices the metaverse. (MIT Technology Review)

8 Gadgets that used to be ‘smart’ are ‘AI-powered’ now
As ever, clever marketing is no guarantee they’ll be successful—but it doesn’t hurt. (WSJ $)
+ AI binoculars for identifying birds? Take my money! (The Verge)

9 TikTok has a bland fixation with performative cleanliness
It’s a reminder that excessive oversanitization can cause more harm than good. (Vox)
+ Things are getting crazy over in TikTok Shop. (NY Mag $)

10 There’s no shame in playing your video games on easy mode 🧟
Life’s too short to be mauled to death by zombies. (The Atlantic $)

Quote of the day

“I’m sure if Picasso or Matisse were still alive they would quit their job.”

—Renowned artist Ai Weiwei explains how even some of the world’s most-recognized artists would have had to rethink their approach if AI had existed in their era to the Guardian.

The big story

The rise of the tech ethics congregation

August 2023

Just before Christmas 2022, a pastor preached a gospel of morals over money to several hundred members of his flock. But the leader in question was not an ordained minister, nor even a religious man.

Polgar is the founder of All Tech Is Human, a nonprofit organization devoted to promoting ethics and responsibility in tech. His congregation is undergoing dramatic growth in an age when the life of the spirit often struggles to compete with cold, hard, capitalism.

Its leaders believe there are large numbers of individuals in and around the technology world, often from marginalized backgrounds, who wish tech focused less on profits and more on being a force for ethics and justice. But attempts to stay above the fray can cause more problems than they solve. Read the full story.

—Greg M. Epstein

We can still have nice things

A place for comfort, fun and distraction in these weird times. (Got any ideas? Drop me a line or tweet ’em at me.)

+ How to fall in love with cottage cheese (even if you hate it).
+ The recipient of the most Razzie Awards, celebrating the worst in cinematic achievement? One Sylvester Stallone.
+ Here’s a good rundown of the albums we’re most looking forward to this year.
+ I had no idea it was possible to grow tea in Scotland, of all places.
+ Awards show speeches aren’t all dull—sometimes they can be downright hilarious.

Three climate technologies breaking through in 2024 Thu, 11 Jan 2024 11:00:00 +0000 This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

Awards season is upon us, and I can’t get enough. Red-carpet fashion, host drama, heartwarming speeches—I love it all.

I caught the Golden Globes last weekend, and the Grammys and Oscars aren’t far off. But the best awards, in my humble opinion, are the 10 Breakthrough Technologies, MIT Technology Review’s list of the tech that’s changing our world. 

This year’s list dropped on Monday, and I’m delighted to share that not one, not two, but three climate tech items are featured. So for the newsletter this week, let’s take a look at a few of these award-winning technologies you need to know about. (And to honor awards season, I’ll also be assigning them to bonus—and completely unofficial—categories.)

Super-efficient solar cells

Winner: Best Supporting Actor

Solar panels are among the most important, and perhaps the most recognizable, tools to address climate change. But one next-generation solar technology could help solar power get even more efficient, and cheaper: perovskite tandem solar cells. 

Most solar cells use silicon to soak up sunlight and transform it into electricity. But other materials can do this job too, including perovskites, a class of crystalline materials. And because perovskites and silicon absorb different wavelengths of light, the two materials can be stacked like a sandwich to make one super-efficient cell. 

Because of their outstanding support for traditional silicon solar materials, super-efficient perovskite tandem cells are my winner for this year’s Best Supporting Actor award. 

There are definitely barriers to commercializing this technology: perovskites are tricky to manufacture and have historically degraded quickly outside in the elements. But some companies say they’re closer than ever to using the materials to transform commercial solar. Read more about the technology here

Enhanced geothermal systems

Winner: Best New Artist

Sucking heat out from the earth is one of the oldest tricks in the book—there’s evidence that humans were using hot springs for heat more than 10,000 years ago. 

We’ve since leveled up, using geothermal energy to produce electricity. But a specific set of factors is needed to harness the energy radiating out of the planet’s core: heat close to the surface, permeable rock, and underground fluid. 

This narrows the potential sites for usable geothermal energy significantly, so a growing number of projects are working to widen access with so-called enhanced geothermal systems. 

An enhanced geothermal system is essentially a human-created geothermal energy source. This often involves drilling down into rock and pumping fluid into it to open up fractures. We’ve seen some recent progress in this field from a handful of companies, including Fervo Energy, which started up a massive pilot facility in 2023 (and made our list of 15 Climate Tech Companies to Watch). 

Because of its spirit of reinvention and innovation, enhanced geothermal systems are my pick for this year’s Best New Artist Award. 

Some of the biggest projects coming are still a few years from coming online, and it could be tough to scale construction on these plants in some places. But enhanced geothermal is definitely a field to keep an eye on. Read more in my colleague June Kim’s write-up here

Heat pumps

Winner: Lifetime Achievement

Last, but certainly not least, we have the venerable heat pump. These devices, which can cool and heat using electricity, are a personal favorite climate technology of mine. 

Heat pumps are super efficient, sometimes almost seeming to defy the laws of physics. They don’t really break any laws, physical or otherwise, as I outlined in a deep dive into how the technology works last year.

While they’re not exactly new, heat pumps are definitely breaking through in a new way. The technology outsold gas furnaces for the first time in the US last year, and sales have been climbing around the world. Globally, heat pumps have the potential to cut emissions by 500 million metric tons in 2030—as much as pulling all the cars in Europe today off the roads. 

For their long-standing and ongoing contributions to decarbonization, heat pumps are my choice for this year’s Lifetime Achievement Award. 

It’s going to be tough to get heat pumps into all the places they need to go to meet climate goals. For more on all things heat pumps, check out my write-up here. 

Congratulations to all our winners! Be sure to check out the rest of the list. It includes everything from wearable headsets to innovative new CRISPR treatments. 

And if you’d like to weigh in on one more award, you can vote for our reader-chosen 11th breakthrough technology here. The candidates are some of the other items we considered for the list. I don’t want to unfairly influence you, but you know my heart always goes with batteries, so feel free to vote accordingly …  

Related reading

Technology is always changing. Don’t miss our list of the technologies breaking through in 2024.

Perovskites were supposed to change the solar world. What’s the holdup?

This startup showed that its underground wells can be used as a massive battery.

Everything you need to know about the wild world of heat pumps.

Another thing

an Orsted wind turbine off the coast of Block Island

It’s been a turbulent time for offshore wind power. Projects are getting delayed and canceled left and right, it seems. 

In 2024, some big moments could determine whether these troubles are more of a bump in the road or a sign of more serious issues. For everything you should watch out for in offshore wind, check out my latest story here.

Keeping up with climate  

It’s officially official—2023 was the hottest year on record, according to the EU’s climate service. Check out the details and some stunning graphics on the record-breaking year. (BBC)

A national lab in California made waves in late 2022 when it achieved a huge milestone for fusion research. You may not know that the facility actually had a massive fusion reactor in the 1980s that never got switched on. (MIT Technology Review)

→ Here’s what’s coming next for fusion research, according to the lab’s current director. (MIT Technology Review)

India is rushing to meet growing demand for electricity, and the country is turning to coal to do it. The government plans to roughly double coal production by 2030. (Bloomberg)

One person’s wastewater is another one’s … heat? New systems can harness the heat in wastewater to heat whole neighborhoods. (BBC)

Norway will open up parts of the Norwegian Sea for seabed mining exploration. The country joins nations including Japan, New Zealand, and Namibia that are considering allowing this new industry to operate in their waters. (New York Times)

→ Seabed mining could be a new source of materials for batteries. But environmentalists are worried about the potential harm. (MIT Technology Review)

Lack of charging infrastructure is a huge barrier to EV adoption. Here are three ways to encourage new chargers in charging deserts. (Canary Media)

Rising temperatures means beavers are moving north—and they’re causing trouble. Specifically, the rodents are creating a feedback loop that’s thawing the ground and disrupting ecosystems. (The Guardian)

Chinese automaker BYD is set to take the world by storm. The company sold more plug-in hybrids and EVs than Tesla did in 2023, and is set to continue its rapid growth this year. (Bloomberg)→ BYD was one of our climate tech companies to watch in 2023. (MIT Technology Review)

Deploying high-performance, energy-efficient AI Wed, 10 Jan 2024 15:03:00 +0000

Although AI is by no means a new technology there have been massive and rapid investments in it and large language models. However, the high-performance computing that powers these rapidly growing AI tools — and enables record automation and operational efficiency — also consumes a staggering amount of energy. With the proliferation of AI comes the responsibility to deploy that AI responsibly and with an eye to sustainability during hardware and software R&D as well as within data centers.

“Enterprises need to be very aware of the energy consumption of their digital technologies, how big it is, and how their decisions are affecting it,” says corporate vice president and general manager of data center platform engineering and architecture at Intel, Zane Ball.

One of the key drivers of a more sustainable AI is modularity, says Ball. Modularity breaks down subsystems of a server into standard building blocks, defining interfaces between those blocks so they can work together. This system can reduce the amount of embodied carbon in a server’s hardware components and allows for components of the overall ecosystem to be reused, subsequently reducing R&D investments.

Downsizing infrastructure within data centers, hardware, and software can also help enterprises reach greater energy efficiency without compromising function or performance. While very large AI models require megawatts of super compute power, smaller, fine-tuned models that operate within a specific knowledge domain can maintain high performance but low energy consumption.

“You give up that kind of amazing general purpose use like when you’re using ChatGPT-4 and you can ask it everything from 17th century Italian poetry to quantum mechanics, if you narrow your range, these smaller models can give you equivalent or better kind of capability, but at a tiny fraction of the energy consumption,” says Ball.

The opportunities for greater energy efficiency within AI deployment will only expand over the next three to five years. Ball forecasts significant hardware optimization strides, the rise of AI factories — facilities that train AI models on a large scale while modulating energy consumption based on its availability — as well as the continued growth of liquid cooling, driven by the need to cool the next generation of powerful AI innovations.

“I think making those solutions available to our customers is starting to open people’s eyes how energy efficient you can be while not really giving up a whole lot in terms of the AI use case that you’re looking for.”

This episode of Business Lab is produced in partnership with Intel.

Full Transcript

Laurel Ruma: From MIT Technology Review, I’m Laurel Ruma and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace.

Our topic is building a better AI architecture. Going green isn’t for the faint of heart, but it’s also a pressing need for many, if not all enterprises. AI provides many opportunities for enterprises to make better decisions, so how can it also help them be greener?

Two words for you: sustainable AI.

My guest is Zane Ball, corporate vice president and general manager of data center platform engineering and architecture at Intel.

This podcast is produced in partnership with Intel.

Welcome Zane.

Zane Ball: Good morning.

Laurel: So to set the stage for our conversation, let’s start off with the big topic. As AI transforms businesses across industries, it brings the benefits of automation and operational efficiency, but that high-performance computing also consumes more energy. Could you give an overview of the current state of AI infrastructure and sustainability at the large enterprise level?

Zane: Absolutely. I think it helps to just kind of really zoom out big picture, and if you look at the history of IT services maybe in the last 15 years or so, obviously computing has been expanding at a very fast pace. And the good news about that history of the last 15 years or so, is while computing has been expanding fast, we’ve been able to contain the growth in energy consumption overall. There was a great study a couple of years ago in Science Magazine that talked about how compute had grown by maybe 550% over a decade, but that we had just increased electricity consumption by a few percent. So those kind of efficiency gains were really profound. So I think the way to kind of think about it is computing’s been expanding rapidly, and that of course creates all kinds of benefits in society, many of which reduce carbon emissions elsewhere.

But we’ve been able to do that without growing electricity consumption all that much. And that’s kind of been possible because of things like Moore’s Law, Big Silicon has been improving with every couple of years and make devices smaller, they consume less power, things get more efficient. That’s part of the story. Another big part of this story is the advent of these hyperscale data centers. So really, really large-scale computing facilities, finding all kinds of economies of scale and efficiencies, high utilization of hardware, not a lot of idle hardware sitting around. That also was a very meaningful energy efficiency. And then finally this development of virtualization, which allowed even more efficient utilization of hardware. So those three things together allowed us to kind of accomplish something really remarkable. And during that time, we also had AI starting to play, I think since about 2015, AI workloads started to play a pretty significant role in digital services of all kinds.

But then just about a year ago, ChatGPT happens and we have a non-linear shift in the environment and suddenly large language models, probably not news to anyone on this listening to this podcast, has pivoted to the center and there’s just a breakneck investment across the industry to build very, very fast. And what is also driving that is that not only is everyone rushing to take advantage of this amazing large language model kind of technology, but that technology itself is evolving very quickly. And in fact also quite well known, these models are growing in size at a rate of about 10x per year. So the amount of compute required is really sort of staggering. And when you think of all the digital services in the world now being infused with AI use cases with very large models, and then those models themselves growing 10x per year, we’re looking at something that’s not very similar to that last decade where our efficiency gains and our greater consumption were almost penciling out.

Now we’re looking at something I think that’s not going to pencil out. And we’re really facing a really significant growth in energy consumption in these digital services. And I think that’s concerning. And I think that means that we’ve got to take some strong actions across the industry to get on top of this. And I think just the very availability of electricity at this scale is going to be a key driver. But of course many companies have net-zero goals. And I think as we pivot into some of these AI use cases, we’ve got work to do to square all of that together.

Laurel: Yeah, as you mentioned, the challenges are trying to develop sustainable AI and making data centers more energy efficient. So could you describe what modularity is and how a modularity ecosystem can power a more sustainable AI?

Zane: Yes, I think over the last three or four years, there’ve been a number of initiatives. Intel’s played a big part of this as well of re-imagining how servers are engineered into modular components. And really modularity for servers is just exactly as it sounds. We break different subsystems of the server down into some standard building blocks, define some interfaces between those standard building blocks so that they can work together. And that has a number of advantages. Number one, from a sustainability point of view, it lowers the embodied carbon of those hardware components. Some of these hardware components are quite complex and very energy intensive to manufacture. So imagine a 30 layer circuit board, for example, is a pretty carbon intensive piece of hardware. I don’t want the entire system, if only a small part of it needs that kind of complexity. I can just pay the price of the complexity where I need it.

And by being intelligent about how we break up the design in different pieces, we bring that embodied carbon footprint down. The reuse of pieces also becomes possible. So when we upgrade a system, maybe to a new telemetry approach or a new security technology, there’s just a small circuit board that has to be replaced versus replacing the whole system. Or maybe a new microprocessor comes out and the processor module can be replaced without investing in new power supplies, new chassis, new everything. And so that circularity and reuse becomes a significant opportunity. And so that embodied carbon aspect, which is about 10% of carbon footprint in these data centers can be significantly improved. And another benefit of the modularity, aside from the sustainability, is it just brings R&D investment down. So if I’m going to develop a hundred different kinds of servers, if I can build those servers based on the very same building blocks just configured differently, I’m going to have to invest less money, less time. And that is a real driver of the move towards modularity as well.

Laurel: So what are some of those techniques and technologies like liquid cooling and ultrahigh dense compute that large enterprises can use to compute more efficiently? And what are their effects on water consumption, energy use, and overall performance as you were outlining earlier as well?

Zane: Yeah, those are two I think very important opportunities. And let’s just take them one at a  time. Emerging AI world, I think liquid cooling is probably one of the most important low hanging fruit opportunities. So in an air cooled data center, a tremendous amount of energy goes into fans and chillers and evaporative cooling systems. And that is actually a significant part. So if you move a data center to a fully liquid cooled solution, this is an opportunity of around 30% of energy consumption, which is sort of a wow number. I think people are often surprised just how much energy is burned. And if you walk into a data center, you almost need ear protection because it’s so loud and the hotter the components get, the higher the fan speeds get, and the more energy is being burned in the cooling side and liquid cooling takes a lot of that off the table.

What offsets that is liquid cooling is a bit complex. Not everyone is fully able to utilize it. There’s more upfront costs, but actually it saves money in the long run. So the total cost of ownership with liquid cooling is very favorable, and as we’re engineering new data centers from the ground up. Liquid cooling is a really exciting opportunity and I think the faster we can move to liquid cooling, the more energy that we can save. But it’s a complicated world out there. There’s a lot of different situations, a lot of different infrastructures to design around. So we shouldn’t trivialize how hard that is for an individual enterprise. One of the other benefits of liquid cooling is we get out of the business of evaporating water for cooling. A lot of North America data centers are in arid regions and use large quantities of water for evaporative cooling.

That is good from an energy consumption point of view, but the water consumption can be really extraordinary. I’ve seen numbers getting close to a trillion gallons of water per year in North America data centers alone. And then in humid climates like in Southeast Asia or eastern China for example, that evaporative cooling capability is not as effective and so much more energy is burned. And so if you really want to get to really aggressive energy efficiency numbers, you just can’t do it with evaporative cooling in those humid climates. And so those geographies are kind of the tip of the spear for moving into liquid cooling.

The other opportunity you mentioned was density and bringing higher and higher density of computing has been the trend for decades. That is effectively what Moore’s Law has been pushing us forward. And I think it’s just important to realize that’s not done yet. As much as we think about racks of GPUs and accelerators, we can still significantly improve energy consumption with higher and higher density traditional servers that allows us to pack what might’ve been a whole row of racks into a single rack of computing in the future. And those are substantial savings. And at Intel, we’ve announced we have an upcoming processor that has 288 CPU cores and 288 cores in a single package enables us to build racks with as many as 11,000 CPU cores. So the energy savings there is substantial, not just because those chips are very, very efficient, but because the amount of networking equipment and ancillary things around those systems is a lot less because you’re using those resources more efficiently with those very high dense components. So continuing, if perhaps even accelerating our path to this ultra-high dense kind of computing is going to help us get to the energy savings we need maybe to accommodate some of those larger models that are coming.

Laurel: Yeah, that definitely makes sense. And this is a good segue into this other part of it, which is how data centers and hardware as well software can collaborate to create greater energy efficient technology without compromising function. So how can enterprises invest in more energy efficient hardware such as hardware-aware software, and as you were mentioning earlier, large language models or LLMs with smaller downsized infrastructure but still reap the benefits of AI?

Zane: I think there are a lot of opportunities, and maybe the most exciting one that I see right now is that even as we’re pretty wowed and blown away by what these really large models are able to do, even though they require tens of megawatts of super compute power to do, you can actually get a lot of those benefits with far smaller models as long as you’re content to operate them within some specific knowledge domain. So we’ve often referred to these as expert models. So take for example an open source model like the Llama 2 that Meta produced. So there’s like a 7 billion parameter version of that model. There’s also, I think, a 13 and 70 billion parameter versions of that model compared to a GPT-4, maybe something like a trillion element model. So it’s far, far, far smaller, but when you fine tune that model with data to a specific use case, so if you’re an enterprise, you’re probably working on something fairly narrow and specific that you’re trying to do.

Maybe it’s a customer service application or it’s a financial services application, and you as an enterprise have a lot of data from your operations, that’s data that you own and you have the right to use to train the model. And so even though that’s a much smaller model, when you train it on that domain specific data, the domain specific results can be quite good in some cases even better than the large model. So you give up that kind of amazing general purpose use like when you’re using ChatGPT-4 and you can ask it everything from 17th century Italian poetry to quantum mechanics, if you narrow your range, these smaller models can give you equivalent or better kind of capability, but at a tiny fraction of the energy consumption.

And we’ve demonstrated a few times, even with just a standard Intel Xeon two socket server with some of the AI acceleration technologies we have in those systems, you can actually deliver quite a good experience. And that’s without even any GPUs involved in the system. So that’s just good old-fashioned servers and I think that’s pretty exciting.

That also means the technology’s quite accessible, right? So you may be an enterprise, you have a general purpose infrastructure that you use for a lot of things, you can use that for AI use cases as well. And if you’ve taken advantage of these smaller models that fit within infrastructure we already have or infrastructure that you can easily obtain. And so those smaller models are pretty exciting opportunities. And I think that’s probably one of the first things the industry will adopt to get energy consumption under control is just right sizing the model to the activity to the use case that we’re targeting. I think there’s also… you mentioned the concept of hardware-aware software. I think that the collaboration between hardware and software has always been an opportunity for significant efficiency gains.

I mentioned early on in this conversation how virtualization was one of the pillars that gave us that kind of fantastic result over the last 15 years. And that was very much exactly that. That’s bringing some deep collaboration between the operating system and the hardware to do something remarkable. And a lot of the acceleration that exists in AI today actually is a similar kind of thinking, but that’s not really the end of the hardware software collaboration. We can deliver quite stunning results in encryption and in memory utilization in a lot of areas. And I think that that’s got to be an area where the industry is ready to invest. It is very easy to have plug and play hardware where everyone programs at a super high level language, nobody thinks about the impact of their software application downstream. I think that’s going to have to change. We’re going to have to really understand how our application designs are impacting energy consumption going forward. And it isn’t purely a hardware problem. It’s got to be hardware and software working together.

Laurel: And you’ve outlined so many of these different kind of technologies. So how can enterprise adoption of things like modularity and liquid cooling and hardware aware software be incentivized to actually make use of all these new technologies?

Zane: A year ago, I worried a lot about that question. How do we get people who are developing new applications to just be aware of the downstream implications? One of the benefits of this revolution in the last 12 months is I think just availability of electricity is going to be a big challenge for many enterprises as they seek to adopt some of these energy intensive applications. And I think the hard reality of energy availability is going to bring some very strong incentives very quickly to attack these kinds of problems.

But I do think beyond that like a lot of areas in sustainability, accounting is really important. There’s a lot of good intentions. There’s a lot of companies with net-zero goals that they’re serious about. They’re willing to take strong actions against those goals. But if you can’t accurately measure what your impact is either as an enterprise or as a software developer, I think you have to kind of find where the point of action is, where does the rubber meet the road where a micro decision is being made. And if the carbon impact of that is understood at that point, then I think you can see people take the actions to take advantage of the tools and capabilities that are there to get a better result. And so I know there’s a number of initiatives in the industry to create that kind of accounting, and especially for software development, I think that’s going to be really important.

Laurel: Well, it’s also clear there’s an imperative for enterprises that are trying to take advantage of AI to curb that energy consumption as well as meet their environmental, social, and governance or ESG goals. So what are the major challenges that come with making more sustainable AI and computing transformations?

Zane: It’s a complex topic, and I think we’ve already touched on a couple of them. Just as I was just mentioning, definitely getting software developers to understand their impact within the enterprise. And if I’m an enterprise that’s procuring my applications and software, maybe cloud services, I need to make sure that accounting is part of my procurement process, that in some cases that’s gotten easier. In some cases, there’s still work to do. If I’m operating my own infrastructure, I really have to look at liquid cooling, for example, an adoption of some of these more modern technologies that let us get to significant gains in energy efficiency. And of course, really looking at the use cases and finding the most energy efficient architecture for that use case. For example, like using those smaller models that I was talking about. Enterprises need to be very aware of the energy consumption of their digital technologies, how big it is and how their decisions are affecting it.

Laurel: So could you offer an example or use case of one of those energy efficient AI driven architectures and how AI was subsequently deployed for it?

Zane: Yes. I think that some of the best examples I’ve seen in the last year were really around these smaller models where Intel did an example that we published around financial services, and we found that something like three hours of fine-tuning training on financial services data allowed us to create a chatbot solution that performed in an outstanding manner on a standard Xeon processor. And I think making those solutions available to our customers is starting to open people’s eyes how energy efficient you can be while not really giving up a whole lot in terms of the AI use case that you’re looking for. And so I think we need to just continue to get those examples out there. We have a number of collaborations such as with Hugging Face with open source models, enabling those solutions on our products like our Gaudi2 accelerator has also performed very well from a performance per watt point of view, the Xeon processor itself. So those are great opportunities.

Laurel: And then how do you envision the future of AI and sustainability in the next three to five years? There seems like so much opportunity here.

Zane: I think there’s going to be so much change in the next three to five years. I hope no one holds me to what I’m about to say, but I think there are some pretty interesting trends out there. One thing, I think, to think about is the trend of AI factories. So training a model is a little bit of an interesting activity that’s distinct from what we normally think of as real time digital services. You have real time digital service like Vinnie, the app on your iPhone that’s connected somewhere in the cloud, and that’s a real time experience. And it’s all about 99.999% uptime, short latencies to deliver that user experience that people expect. But AI training is different. It’s a little bit more like a factory. We produce models as a product and then the models are used to create the digital services. And that I think becomes an important distinction.

So I can actually build some giant gigawatt facility somewhere that does nothing but train models on a large scale. I can partner with the infrastructure of the electricity providers and utilities much like an aluminum plant or something would do today where I actually modulate my energy consumption with its availability. Or maybe I take advantage of solar or wind power’s ability, I can modulate when I’m consuming power, not consuming power. And so I think if we’re going to see some really large scale kinds of efforts like that, and those AI factories could be very, very efficient, they can be liquid cooled and they can be closely coupled to the utility infrastructure. I think that’s a pretty exciting opportunity. And while that’s kind of an acknowledgement that there’s going to be gigawatts and gigawatts of AI training going on. Second opportunity, I think in this three to five years, I do think liquid cooling will become far more pervasive.

I think that will be driven by the need to cool the next generation of accelerators and GPUs will make it a requirement, but then that will be able to build that technology out and scale it more ubiquitously for all kinds of infrastructure. And that will let us shave huge amounts of gigawatts out of the infrastructure, save hundreds of billions of gallons of water annually. I think that’s incredibly exciting. And if I just… the innovation on the model size as well, so much has changed with just the last five years with large language models like ChatGPT, let’s not assume there’s not going to be even bigger change in the next three to five years. What are the new problems that are going to be solved, new innovations? So I think as the costs and impact of AI are being felt more substantively, there’ll be a lot of innovation on the model side and people will come up with new ways of cracking some of these problems and there’ll be new exciting use cases that come about.

Finally, I think on the hardware side, there will be new AI architectures. From an acceleration point of view today, a lot of AI performance is limited by memory bandwidth, memory bandwidth and networking bandwidth between the various accelerator components. And I don’t think we’re anywhere close to having an optimized AI training system or AI inferencing systems. I think the discipline is moving faster than the hardware and there’s a lot of opportunity for optimization. So I think we’ll see significant differences in networking, significant differences in memory solutions over the next three to five years, and certainly over the next 10 years that I think can open up a substantial set of improvements.

And of course, Moore’s Law itself continues to advance advanced packaging technologies, new transistor types that allow us to build ever more ambitious pieces of silicon, which will have substantially higher energy efficiency. So all of those things I think will be important. Whether we can keep up with our energy efficiency gains with the explosion in AI functionality, I think that’s the real question and it’s just going to be a super interesting time. I think it’s going to be a very innovative time in the computing industry over the next few years.

Laurel: And we’ll have to see. Zane, thank you so much for joining us on the Business Lab.

Zane: Thank you.

Laurel: That was Zane Ball, corporate vice president and general manager of data center platform engineering and architecture at Intel, who I spoke with from Cambridge, Massachusetts, the home of MIT and MIT Technology Review.

That’s it for this episode of Business Lab. I’m your host, Laurel Ruma. I’m the director of Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology, and you can also find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at

This show is available wherever you get your podcasts. If you enjoyed this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Giro Studios. Thanks for listening.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

The Download: weight-loss drugs, and the future of offshore wind Wed, 10 Jan 2024 13:10:00 +0000 This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Weight-loss drugs: 10 Breakthrough Technologies 2024

One-third of US adults have obesity, a condition that makes them more susceptible to heart disease, diabetes, and cancer. However, there’s huge hope that anti-obesity drugs—including Wegovy and Mounjaro—could help address this public health crisis.

While most were originally developed to treat type 2 diabetes, these medications help people lose weight by suppressing their appetite when injected once a week at home. Success stories are everywhere online.

These drugs aren’t perfect. Many patients must stay on the drugs for life to keep the weight off, and the long-term impacts of these treatments remain unknown. Nevertheless, the treatments could improve the health of millions of people. Read the full story.

—Abdullahi Tsanni

Weight-loss drugs is one of MIT Technology Review’s 10 Breakthrough Technologies for 2024. Check out the rest of the list and vote for the final 11th breakthrough—we’ll reveal the winner in April.

+ We’ve never understood how hunger works. But researchers think they’re getting closer to finally determining how this basic drive functions. Check out Adam Piore’s fascinating story.

What’s next for offshore wind

It’s a turbulent time for offshore wind power. Large groups of turbines installed along coastlines can harness the powerful, consistent winds that blow offshore, and can be a major boon to efforts to clean up the electricity supply around the world. 

But in recent months, projects around the world have been delayed or even canceled as costs have skyrocketed and supply chain disruptions have swelled. These setbacks could spell trouble for efforts to cut the greenhouse-gas emissions that cause climate change.

The question is whether current troubles are more like a speed bump or a sign that 2024 will see the industry run off the road. Here’s what’s next for offshore wind power.

—Casey Crownhart

The end of anonymity online in China

Anonymity online in China changed drastically last year. Following many smaller decisions that make posting anonymously more difficult, the biggest blow came in October when all social media platforms in China demanded that users with large followings display their legal names.

The government and the platforms argue that the new rule can help prevent online harassment and misinformation. But their argument conveniently neglects what anonymity—a right that has existed since the invention of the internet—has afforded people online. 

There’s no doubt that the introduction of the mandatory real-name rule will almost certainly lead to more strict and expansive restrictions for everyone. Perhaps the only glimmer of hope is that users all over China have not given up, and are still finding workarounds to stay anonymous. Read the full story.

—Zeyi Yang

This story is from China Report, our weekly newsletter giving you the inside track on all things happening in China. Sign up to receive it in your inbox every Tuesday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The SEC’s X account was hacked to promote bitcoin
X said the SEC failed to set up two-factor authentication to properly secure its account. (CoinDesk)
+ The crypto industry’s jubilation at the news was short lived. (NYT $)
+ Those in the know could have made a major profit from the scam. (WP $)

2 This AI gadget can use your apps for you
But don’t call the Rabbit R1 a smartphone replacement—yet. (The Verge)
+ As usual, CES is jam-packed with weird and wonderful products. (WP $)

3 A DeepMind spinoff wants to halve drug discovery times
Currently, it takes up to a decade and close to $3 billion to discover and develop a new drug. (FT $)
+ AI is dreaming up drugs that no one has ever seen. Now we’ve got to see if they work. (MIT Technology Review)

4 This US chip technology is fueling China’s encryption ambitions

Washington is unsure how—or if—they should attempt to limit its use. (NYT $)
+ Enterprising Chinese firms are repurposing Nvidia chips to circumvent export blocks. (FT $)
+ These simple design rules could turn the chip industry on its head. (MIT Technology Review)

5 There’s mounting evidence to suggest your mood is linked to your gut health
Ignore microbes at your own peril. (Knowable Magazine)
+ The hunter-gatherer groups at the heart of a microbiome gold rush. (MIT Technology Review)

6 Hollywood’s actors union has struck an AI voiceover deal
And consent is at its heart. (Bloomberg $)
+ Deepfake ads featuring unwitting celebrities are rife on YouTube. (404 Media)
+ How Meta and AI companies recruited striking actors to train AI. (MIT Technology Review)

7 Robotics labs across the world are teaming up to create a robot brain 🤖
Robots need lots of data to train on. Why not collaborate? (IEEE Spectrum)

8 We can’t agree on how worried we should be about ultra-processed food
Coming up with a better way to define what is and isn’t ultra-processed would be a good place to start. (WSJ $)

9 Quora is where the intelligent internet goes to die
It turns out there is such a thing as a stupid question, after all. (The Atlantic $)

10 Would you let an algorithm predict how long you’re going to live? 💀
It’s impossible to be sure of anything but death, taxes—and AI hype. (FT $)

Quote of the day

“There’s just so much f**king competition.”

—Joe Forzano, an unemployed software engineer, explains to Motherboard how intensely tough it is to land a new engineering gig in the age of AI.

The big story

The first babies conceived with a sperm-injecting robot have been born

April 2023

Last spring, a group of engineers set out to test the sperm-injecting robot they’d designed. Altogether, the robot was used to fertilize more than a dozen eggs. 

The result of the procedures, say the researchers, was healthy embryos—and now two baby girls, who they claim are the first people born after fertilization by a “robot.”

The startup behind the robot, Overture Life, says its device is an initial step toward automating IVF, and potentially making the procedure less expensive and far more common than it is today. MIT Technology Review has identified a half-dozen startups with similar aims. Some have roots in university laboratories specializing in miniaturized lab-on-a-chip technology.

But fully automating the process will be far from easy. Read the full story.

—Antonio Regalado

We can still have nice things

A place for comfort, fun and distraction in these weird times. (Got any ideas? Drop me a line or tweet ’em at me.)

+ The gadget catalogs of yesteryear are really quite something.
+ I wouldn’t trust these extremely outdated entertaining tips—no brilliant guests and no chips!?
+ Everything you can expect in Tinseltown this year, starring slimmer budgets and Jenna Ortega.
+ Aspen Gay Ski Week sounds completely wild in the best possible way 🏳️‍🌈
+ The latest star to join the Minecraft movie? Mr Jack Black.

The end of anonymity online in China Wed, 10 Jan 2024 11:00:00 +0000 This story first appeared in China Report, MIT Technology Review’s newsletter about technology in China. Sign up to receive it in your inbox every Tuesday.

Happy New Year! I hope you had a good rest over the holidays and feel ready to take on 2024. But for one more time, please allow me to indulge in a look back at 2023.

At the end of last month, I published an essay reflecting on how the prospects for anonymity online in China changed drastically last year. Following many smaller decisions that make posting anonymously more difficult, the largest blow came in October when all social media platforms in China demanded that certain users with large followings display their legal names.

The government and the platforms argue that the new rule can help prevent online harassment and misinformation. While anonymity can be associated with wrongdoing, their argument conveniently neglects what anonymity—a right that has existed since the invention of the internet—has afforded people online. 

Who among us hasn’t participated in a niche online hobby that we didn’t tell our family about? Who insists that every online acquaintance call them by their real name? There’s comfort in knowing that my online persona and who I am in real life don’t have to be the same. Not everyone should, or deserves to, know everything about us. 

Scholars I talked to have observed and found evidence of many benefits that come with anonymity in China. It gives people the courage to speak up against censorship or provide communal help to strangers. “We are more likely to do what’s risky when we feel there’s more protection,” says Xinyu Pan, a researcher at Hong Kong University. It’s particularly important to marginalized groups, from women to LGBTQ individuals, who feel that their identities could attract harassment online. They can find comfort and community in anonymity.

This topic is important for me both professionally and personally. As a reporter, I’m always watching what people are saying online and working to extract important information from between the lines. But I’ve also used Chinese social media personally for more than a decade, and my profiles and communities mean a lot to me, whether as archives of my life’s moments or places where I met dear friends.

That’s why I wrote the essay. And I’m worried there’s more change to come. 

Vibe shifts are always small when they begin. I felt one earlier last year, when I started to notice little signs of aggression here and there that made me less comfortable sharing real-life experiences online. But soon they can begin to feel like a tsunami. And now, if people don’t want to end their digital lives, they don’t have much choice; the only option seems to be to give in and float with the waves, even if we don’t know where it’s taking us.

Consider that when it was first announced in October, platforms stated the real-name rule would only apply to accounts in more “serious” fields—people talking about politics, financial news, laws, health care. Even Weibo’s CEO, Wang Gaofei, replied to a user with 2 million followers who was worried about the rule, posting, “Took a look at [the] content. If it’s only an influencer sharing about their personal life, I don’t think they need to display their real names upfront.”

But as we’ve seen in the past, these kinds of “small” changes are really a slippery slope. Fast-forward to today and that Weibo user’s real name is already on their public profile. And other accounts on the platform that don’t engage in serious topics—pet influencers, comedians, artists, car bloggers—have all received messages that they need to display their names or their accounts’ reach will be restricted, essentially meaning they’d be shadow-banned on the platform. 

Meanwhile, some platforms have acted even more quickly to implement the rule thoroughly. Douyin, the Chinese version of TikTok, seems to be already displaying the real names of most users with more than 500,000 followers. And last week, accounts on Bilibili, a Chinese YouTube-like video platform, also started mass-displaying popular users’ real names. 

For people like me, this all proves that our fear is not overblown: the introduction of the mandatory real-name rule will almost certainly lead to more strict and expansive restrictions for everyone. The tendency to control more will always prevail, as platforms tend to err on the side of caution in China’s stringent censorship ecosystem.

Perhaps the only glimmer of hope I’ve found is that users all over China have not given up. Through rounds of previous changes that restricted anonymity, they’ve come up with all kinds of workarounds to protect themselves, either by adopting shared identities or entrusting a group account to post content for them. These solutions are not guaranteed to work in the long term, but I don’t doubt people will continue to come up with creative solutions that we haven’t even thought of yet. As always, to report on internet censorship in China is to report on the ingenious grassroots resistance. Perhaps that’s at least something to look forward to in 2024.

What do you think about the value of social media anonymity? Let me know where you stand by writing to

Catch up with China

1. A draft of a harsh new regulation regarding video games tanked the stocks of major Chinese tech companies and caused widespread market fears in December. Now, a Chinese official behind the regulation has been removed from his position. (Reuters $)

  • China’s domestic gaming industry was just starting to pick up after a lengthy freeze on game publishing approvals. (Pocket Gamer)

2. China has sanctioned five US defense companies for selling arms to Taiwan. (BBC

3. In the fourth quarter of 2023, Chinese electric-vehicle maker BYD officially outsold Tesla globally for the first time. (Wall Street Journal $)

  • The company is now spending 2 billion RMB ($281 million) to reward its dealers. (Reuters $)
  • Want to know more about BYD? It was on our 15 Climate Tech Companies to Watch in 2023. (MIT Technology Review $)

4. As China has set aggressive goals for decarbonization, “dinosaur” state-owned companies are being forced to pivot to using more renewable energy. (Financial Times $)

5. For two decades, major Chinese e-commerce platforms like Alibaba didn’t offer a “refund-only” option for buyers. That’s finally changed. (South China Morning Post $)

6. Thermo Fisher, a US-based biotechnology company, says it has halted sales of DNA collection kits to Tibet. The sales were criticized after it was revealed that the Chinese police used these kits for mass DNA collection. (Axios)

Lost in translation

If you call up or message a customer service representative in China today, there’s a high chance you will be answered by an AI chatbot masquerading as a human. But as the publication China News Service reports, the technology has brought more frustration than convenience, since it often gives completely irrelevant or boilerplate responses. The users end up wasting much more time and energy trying to circumvent the AI and get to a human representative. Even though the technology is not yet mature, AI customer service is prevalent because it’s a fairly easy way for businesses to cut costs. And its use will only expand: the AI customer service market in China is expected to grow threefold in five years.

One more thing

Have you ever seen a Chinese terra-cotta warrior looking so expressive? Well, it’s not real; it was generated by Alibaba’s newly released image-to-video model. The feature, called “Everybody is a dancing king,” can move any still image into a dance TikTok and is included in Alibaba’s AI app Tongyi Qianwen. Predictably, it’s going a bit viral on social media. Wanna watch the (generated) dance moves of Napoleon and Jeff Bezos? Scroll down in this story by the Chinese publication QbitAI.

A terra-cotta warrior in a museum, doing an expressive dab pose as part of a viral dance routine.
What’s next for offshore wind Wed, 10 Jan 2024 10:00:00 +0000 MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of our series here.

It’s a turbulent time for offshore wind power.

Large groups of turbines installed along coastlines can harness the powerful, consistent winds that blow offshore. Given that 40% of the global population lives within 60 miles of the ocean, offshore wind farms can be a major boon to efforts to clean up the electricity supply around the world. 

But in recent months, projects around the world have been delayed or even canceled as costs have skyrocketed and supply chain disruptions have swelled. These setbacks could spell trouble for efforts to cut the greenhouse-gas emissions that cause climate change.

The coming year and beyond will likely be littered with more delayed and canceled projects, but the industry is also seeing new starts and continuing technological development. The question is whether current troubles are more like a speed bump or a sign that 2024 will see the industry run off the road. Here’s what’s next for offshore wind power.

Speed bumps and setbacks

Wind giant Ørsted cited rising interest rates, high inflation, and supply chain bottlenecks in late October when it canceled its highly anticipated Ocean Wind 1 and Ocean Wind 2 projects. The two projects would have supplied just over 2.2 gigawatts to the New Jersey grid—enough energy to power over a million homes. Ørsted is one of the world’s leading offshore wind developers, and the company was included in MIT Technology Review’s list of 15 Climate Tech Companies to Watch in 2023. 

The shuttered projects are far from the only setback for offshore wind in the US today—over 12 gigawatts’ worth of contracts were either canceled or targeted for renegotiation in 2023, according to analysis by BloombergNEF, an energy research group.

Part of the problem lies in how projects are typically built and financed, says Chelsea Jean-Michel, a wind analyst at BloombergNEF. After securing a place to build a wind farm, a developer sets up contracts to sell the electricity that will be generated by the turbines. That price gets locked in years before the project is finished. For projects getting underway now, contracts were generally negotiated in 2019 or 2020.

A lot has changed in just the past five years. Prices for steel, one of the most important materials in turbine construction, increased by over 50% from January 2019 through the end of 2022 in North America and northern Europe, according to a 2023 report from the American Clean Power Association.

Inflation has also increased the price for other materials, and higher interest rates mean that borrowing money is more expensive too. So now, developers are arguing that the prices they agreed to previously aren’t reasonable anymore.

Economic trouble for the industry is global. The UK’s last auction for offshore wind leases yielded no bidders. In addition, a major project that had been planned for the North Sea was canceled by the developer in July. Japanese developers that had jumped into projects in Taiwan are suddenly pulling out as costs shoot up in that still-developing market.

China stands out in an otherwise struggling landscape. The country is now the world’s largest offshore wind market, accounting for nearly half of installed capacity globally. Quick development and rising competition have actually led to falling prices for some projects there.

Growing pains

While many projects around the world have seen setbacks over the last year, the problems are most concentrated in newer markets, including the US. Problems have continued since the New Jersey cancellations—in the first weeks of 2024, developers of several New York projects asked to renegotiate their contracts, which could delay progress even if those developments end up going ahead.

While over 10% of electricity in the US comes from wind power, the vast majority is generated by land-based turbines. The offshore wind market in the US is at least a decade behind the more established ones in countries like the UK and Denmark, says Walt Musial, chief engineer of offshore wind energy at the US National Renewable Energy Laboratory.

One open question over the next year will be how quickly the industry can increase the capacity to build and install wind turbines in the US. “The supply chain in the US for offshore wind is basically in its infancy. It doesn’t really exist,” Jean-Michel says.

That’s been a problem for some projects, especially when it comes for the ships needed to install wind turbines. One of the reasons Ørsted gave for canceling its New Jersey project was a lack of these vessels.

The troubles have been complicated by a single century-old law, which mandates that only ships built and operated by the US can operate from US ports. Projects in the US have worked around this restriction by operating from European ports and using large US barges offshore, but that can slow construction times significantly, Musial says. 

One of the biggest developments in 2024 could be the completion of a single US-built ship that can help with turbine installation. The ship is under construction in Texas, and Dominion Energy has spent over $600 million on it so far. After delays, it’s scheduled to be completed in late 2024. 

Tax credits are providing extra incentive to build out the offshore wind supply chain in the US. Existing credits for offshore wind projects are being extended and expanded by the Inflation Reduction Act, with as much as 40% available on the cost of building a new wind farm. However, to qualify for the full tax credit, projects will need to use domestically sourced materials. Strengthening the supply chain for those materials will be a long process, and the industry is still trying to adjust to existing conditions. 

Still, there are some significant signs of progress for US offshore wind. The nation’s second large-scale offshore wind farm began producing electricity in early January. Several areas of seafloor are expected to go up for auction for new development in 2024, including sites in the central Atlantic and off the coast of Oregon. Sites off the coast of Maine are expected to be offered up the following year. 

But even that forward momentum may not be enough for the nation to meet its offshore wind goals. While the Biden administration has set a target of 30 gigawatts of offshore wind capacity installed by the end of the decade, BloombergNEF’s projection is that the country will likely install around half that, with 16.4 gigawatts of capacity expected by 2030.

Technological transformation

While economic considerations will likely be a limiting factor in offshore wind this year, we’re also going to be on the lookout for technological developments in the industry.

Wind turbines still follow the same blueprint from decades ago, but they are being built bigger and bigger, and that trend is expected to continue. That’s because bigger turbines tend to be more efficient, capturing more energy at a lower cost.

A decade ago, the average offshore wind turbine produced an output of around 4 megawatts. In 2022, that number was just under 8 MW. Now, the major turbine manufacturers are making models in the 15 MW range. These monstrous structures are starting to rival the size of major landmarks, with recent installations nearing the height of the Eiffel Tower.

In 2023, the wind giant Vestas tested a 15 MW model, which earned the distinction of being the world’s most powerful wind turbine. The company received certification for the design at the end of the year, and it will be used in a Danish wind farm that’s expected to begin construction in 2024. 

In addition, we’ll likely see more developments in the technology for floating offshore wind turbines. While most turbines deployed offshore are secured in the seabed floor, some areas, like the west coast of the US, have deep water offshore, making this impossible.

Floating turbines could solve that problem, and several pilot projects are underway around the world, including Hywind Tampen in Norway, which launched in mid-2023, and WindFloat Atlantic in Portugal.

There’s a wide variety of platform designs for floating turbines, including versions resembling camera tripods, broom handles, and tires. It’s possible the industry will start to converge on one in the coming years, since standardization will help bring prices down, says BloombergNEF’s Jean-Michel. But whether that will be enough to continue the growth of this nascent industry will depend on how economic factors shake out. And it’s likely that floating projects will continue to make up less than 5% of offshore wind power installations, even a decade from now. 

The winds of change are blowing for renewable energy around the world. Even with economic uncertainty ahead, offshore wind power will certainly be a technology to watch in 2024.

Bringing breakthrough data intelligence to industries Tue, 09 Jan 2024 14:00:00 +0000 As organizations recognize the transformational opportunity presented by generative AI, they must consider how to deploy that technology across the enterprise in the context of their unique industry challenges, priorities, data types, applications, ecosystem partners, and governance requirements. Financial institutions, for example, need to ensure that data and AI governance has the built-in intelligence to fully align with strict compliance and regulatory requirements. Media and entertainment (M&E) companies seek to build AI models to drive deeper product personalization. And manufacturers want to use AI to make their internet of things (IoT) data insights readily accessible to everyone from the data scientist to the shop floor worker.

In any of these scenarios, the starting point is access to all relevant data—of any type, from any source, in real time—governed comprehensively and shared across an industry ecosystem. When organizations can achieve this with the right data and AI foundation, they have the beginnings of data intelligence: the ability to understand their data and break free from data silos that would block the most valuable AI outcomes.

But true data intelligence is about more than establishing the right data foundation. Organizations are also wrestling with how to overcome dependence on highly technical staff and create frameworks for data privacy and organizational control when using generative AI. Specifically, they are looking to enable all employees to use natural language to glean actionable insight from the company’s own data; to leverage that data at scale to train, build, deploy, and tune their own secure large language models (LLMs); and to infuse intelligence about the company’s data into every business process.

In this next frontier of data intelligence, organizations will maximize value by democratizing AI while differentiating through their people, processes, and technology within their industry context. Based on a global, cross-industry survey of 600 technology leaders as well as in-depth interviews with technology leaders, this report explores the foundations being built and leveraged across industries to democratize data and AI. Following are its key findings:

• Real-time access to data, streaming, and analytics are priorities in every industry. Because of the power of data-driven decision-making and its potential for game-changing innovation, CIOs require seamless access to all of their data and the ability to glean insights from it in real time. Seventy-two percent of survey respondents say the ability to stream data in real time for analysis and action is “very important” to their overall technology goals, while another 20% believe it is “somewhat important”—whether that means enabling real-time recommendations in retail or identifying a next best action in a critical health-care triage situation.

• All industries aim to unify their data and AI governance models. Aspirations for a single approach to governance of data and AI assets are strong: 60% of survey respondents say a single approach to built-in governance for data and AI is “very important,” and an additional 38% say it is “somewhat important,” suggesting that many organizations struggle with a fragmented or siloed data architecture. Every industry will have to achieve this unified governance in the context of its own unique systems of record, data pipelines, and requirements for security and compliance.

• Industry data ecosystems and sharing across platforms will provide a new foundation for AI-led growth. In every industry, technology leaders see promise in technology-agnostic data sharing across an industry ecosystem, in support of AI models and core operations that will drive more accurate, relevant, and profitable outcomes. Technology teams at insurers and retailers, for example, aim to ingest partner data to support real-time pricing and product offer decisions in online marketplaces, while manufacturers see data sharing as an important capability for continuous supply chain optimization. Sixty-four percent of survey respondents say the ability to share live data across platforms is “very important,” while an additional 31% say it is “somewhat important.” Furthermore, 84% believe a managed central marketplace for data sets, machine learning models, and notebooks is very or somewhat important.

• Preserving data and AI flexibility across clouds resonates with all verticals. Sixty-three percent of respondents across verticals believe that the ability to leverage multiple cloud providers is at least somewhat important, while 70% feel the same about open-source standards and technology. This is consistent with the finding that 56% of respondents see a single system to manage structured and unstructured data across business intelligence and AI as “very important,” while an additional 40% see this as “somewhat important.” Executives are prioritizing access to all of the organization’s data, of any type and from any source, securely and without compromise.

• Industry-specific requirements will drive the prioritization and pace by which generative AI use cases are adopted. Supply chain optimization is the highest-value generative AI use case for survey respondents in manufacturing, while it is real-time data analysis and insights for the public sector, personalization and customer experience for M&E, and quality control for telecommunications. Generative AI adoption will not be one-size-fits-all; each industry is taking its own strategy and approach. But in every case, value creation will depend on access to data and AI permeating the enterprise’s ecosystem and AI being embedded into its products and services.

Maximizing value and scaling the impact of AI across people, processes, and technology is a common goal across industries. But industry differences merit close attention for their implications on how intelligence is infused into the data and AI platforms. Whether it be for the retail associate driving omnichannel sales, the health-care practitioner pursuing real-world evidence, the actuary analyzing risk and uncertainty, the factory worker diagnosing equipment, or the telecom field agent assessing network health, the language and scenarios AI will support vary significantly when democratized to the front lines of every industry.

Download the report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

The Download: what to expect in AI in 2024 Tue, 09 Jan 2024 13:10:00 +0000 This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

AI for everything: 10 Breakthrough Technologies 2024

When OpenAI launched ChatGPT in November 2022, nobody knew what was coming. But that low-key release changed everything, and by January, ChatGPT had become the fastest-growing web app ever.

That was only the beginning. In February, Microsoft and Google revealed rival plans to combine chatbots with search—plans that reimagined our daily interactions with the internet. And while early demos weren’t great, the genie wasn’t going back in its bottle.

Never has such radical new technology gone from experimental prototype to consumer product so fast and at such scale. And what’s clear is that we haven’t even begun to make sense of it all, let alone reckon with its impact. Read the full story.

—Will Douglas Heaven

AI for everything is one of MIT Technology Review’s 10 Breakthrough Technologies for 2024. Check out the rest of the list and vote for the final 11th breakthrough—we’ll reveal the winner in April.

+ If you’re interested in learning more, check out Will’s story on the six big questions that will dictate the future of generative AI, for better or worse.

What to expect from the coming year in AI

Looking to the year ahead, all signs point to there being immense pressure on AI companies to show that generative AI can make money and that Silicon Valley can produce the “killer app” for AI.

This year will also be another huge year for AI regulation around the world. In 2023 the first sweeping AI law was agreed upon in the European Union, Senate hearings and executive orders unfolded in the US, and China introduced specific rules for algorithms. If last year lawmakers agreed on a vision, 2024 will be the year policies start to morph into concrete action.

But even as the generative-AI revolution unfolds at a breakneck pace, there are still some big unresolved questions that urgently need answering. Read the full story.

—Melissa Heikkilä

This story is from The Algorithm, our weekly newsletter covering the latest AI developments. Sign up to receive it in your inbox every Monday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 We’ve finally got a release date for Apple’s Vision Pro headset
If you’ve got $3,499 spare, mark February 2 in your diary. (The Verge)
+ Apple is training its retail staff on how to demo the headset correctly. (Bloomberg $)
+ These minuscule pixels are poised to take augmented reality by storm. (MIT Technology Review)

2 Things aren’t looking great for the Peregrine lunar lander
A fuel leak means it’s highly unlikely to make it to the moon after all. (WP $) 
+ It started experiencing difficulty just hours after launch. (FT $)
+ The US company in charge is worried it won’t be able to control it much longer. (BBC)

3 The OpenAI and New York Times lawsuit is getting ugly
The AI company has accused the newspaper of not “telling the full story.” (FT $)
+ The Times is the first major US media organization to sue OpenAI. (NYT $)
+ How judges, not politicians, could dictate America’s AI rules. (MIT Technology Review)

4 China claims to have cracked Apple’s AirDrop feature
To reveal the phone numbers and email addresses of previously-anonymous senders. (Bloomberg $)

5 Our skies are chock-full of satellites
And they’re both a blessing and burden to astronomers back on Earth. (NYT $)
+ Amazon and SpaceX are head to head in a battle for satellite internet dominance. (MIT Technology Review)

6 Your body’s cells communicate with each other about aging
When they no longer talk to each other, the body starts to decline. (Quanta Magazine)
+ The debate over whether aging is a disease rages on. (MIT Technology Review)

7 What isn’t plant-based these days?
Cynics might say it’s an easy way for companies to bump up prices. (The Atlantic $)

8 A major fantasy games publisher was caught out using AI-generated content 🔮
The Magic: The Gathering maker had originally denied any generative involvement. (Motherboard)
+ This artist is dominating AI-generated art. And he’s not happy about it. (MIT Technology Review)

9 It’s not just you—dating apps really are getting worse
And users are ditching them in favor of IRL serendipity. (Bustle)
+ Looking for love on the apps is getting more and more expensive. (FT $)

10 Vinted wants to make secondhand clothing our first choice 👚
No seller fees and fiddling around with postage, for starters. (The Guardian)

Quote of the day

“We can be confident we haven’t seen a warmer year globally since the birth of Christ.”

—Professor Piers Forster, interim chair of the UK’s Climate Change Committee, reflects on 2023 being named as the hottest year on record, Sky News reports.

The big story

Computer scientists designing the future can’t agree on what privacy means

April 2023

When computer science students and faculty at Carnegie Mellon University’s Institute for Software Research returned to campus in the summer of 2020, there was a lot to adjust to.

The department had moved into a brand-new building, complete with experimental devices called Mites. Embedded in more than 300 locations throughout the building, these light-switch-size devices measure 12 types of data—including motion and sound.

The Mites had been installed as part of a research project on smart buildings, and were quickly met with resistance from students and faculty who felt the devices would subject them to experimental surveillance without their consent.

The conflict has deteriorated into a bitter dispute, complete with accusations of bullying, vandalism, misinformation, and workplace retaliation. Read the full story.

—Eileen Guo & Tate Ryan-Mosley

We can still have nice things

A place for comfort, fun and distraction in these weird times. (Got any ideas? Drop me a line or tweet ’em at me.)

+ I’d just love a helpful tidy mouse companion. 🐁
+ This Instagram account is a celebration of all kinds of jelly, and a thing of beauty.
+ The trick to making even the cheapest coffee beans taste better? It’s all in the tamping.
+ Happy birthday to Jimmy Page—80 years old today!
+ Man, they just don’t make choose your own adventures like they used to.

What to expect from the coming year in AI Tue, 09 Jan 2024 09:37:49 +0000 This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Happy new year! I hope you had a relaxing break. I spent it up in the Arctic Circle skiing, going to the sauna, and playing card games with my family by the fire. 10/10 would recommend. 

I also had plenty of time to reflect on the past year. There are so many more of you reading The Algorithm than when we first started this newsletter, and for that I am eternally grateful. Thank you for joining me on this wild AI ride. Here’s a cheerleading pug as a little present! 

So what can we expect in 2024? All signs point to there being immense pressure on AI companies to show that generative AI can make money and that Silicon Valley can produce the “killer app” for AI. Big Tech, generative AI’s biggest cheerleaders, is betting big on customized chatbots, which will allow anyone to become a generative-AI app engineer, with no coding skills needed. Things are already moving fast: OpenAI is reportedly set to launch its GPT app store as early as this week. We’ll also see cool new developments in AI-generated video, a whole lot more AI-powered election misinformation, and robots that multitask. My colleague Will Douglas Heaven and I shared our four predictions for AI in 2024 last week—read the full story here

This year will also be another huge year for AI regulation around the world. In 2023 the first sweeping AI law was agreed upon in the European Union, Senate hearings and executive orders unfolded in the US, and China introduced specific rules for things like recommender algorithms. If last year lawmakers agreed on a vision, 2024 will be the year policies start to morph into concrete action. Together with my colleagues Tate Ryan-Mosley and Zeyi Yang, I’ve written a piece that walks you through what to expect in AI regulation in the coming year. Read it here

But even as the generative-AI revolution unfolds at a breakneck pace, there are still some big unresolved questions that urgently need answering, writes Will. He highlights problems around bias, copyright, and the high cost of building AI, among other issues. Read more here

My addition to the list would be generative models’ huge security vulnerabilities. Large language models, the AI tech that powers applications such as ChatGPT, are really easy to hack. For example, AI assistants or chatbots that can browse the internet are very susceptible to an attack called indirect prompt injection, which allows outsiders to control the bot by sneaking in invisible prompts that make the bots behave in the way the attacker wants them to. This could make them powerful tools for phishing and scamming, as I wrote back in April. Researchers have also successfully managed to poison AI data sets with corrupt data, which can break AI models for good. (Of course, it’s not always a malicious actor trying to do this. Using a new tool called Nightshade, artists can add invisible changes to the pixels in their art before they upload it online so that if it’s scraped into an AI training set, it can cause the resulting model to break in chaotic and unpredictable ways.) 

Despite these vulnerabilities, tech companies are in a race to roll out AI-powered products, such as assistants or chatbots that can browse the web. It’s fairly easy for hackers to manipulate AI systems by poisoning them with dodgy data, so it’s only a matter of time until we see an AI system being hacked in this way. That’s why I was pleased to see NIST, the US technology standards agency, raise awareness about these problems and offer mitigation techniques in a new guidance published at the end of last week. Unfortunately, there is currently no reliable fix for these security problems, and much more research is needed to understand them better.

AI’s role in our societies and lives will only grow bigger as tech companies integrate it into the software we all depend on daily, despite these flaws. As regulation catches up, keeping an open, critical mind when it comes to AI is more important than ever.

Deeper Learning

How machine learning might unlock earthquake prediction

Our current earthquake early warning systems give people crucial moments to prepare for the worst, but they have their limitations. There are false positives and false negatives. What’s more, they react only to an earthquake that has already begun—we can’t predict an earthquake the way we can forecast the weather. If we could, it would  let us do a lot more to manage risk, from shutting down the power grid to evacuating residents.

Enter AI: Some scientists are hoping to tease out hints of earthquakes from data—signals in seismic noise, animal behavior, and electromagnetism—with the ultimate goal of issuing warnings before the shaking begins. Artificial intelligence and other techniques are giving scientists hope in the quest to forecast quakes in time to help people find safety. Read more from Allie Hutchison

Bits and Bytes

AI for everything is one of MIT Technology Review’s 10 breakthrough technologies
We couldn’t put together a list of the tech that’s most likely to have an impact on the world without mentioning AI. Last year tools like ChatGPT reached mass adoption in record time, and reset the course of an entire industry. We haven’t even begun to make sense of it all, let alone reckon with its impact. (MIT Technology Review

Isomorphic Labs has announced it’s working with two pharma companies
Google DeepMind’s drug discovery spinoff has two new “strategic collaborations” with major pharma companies Eli Lilly and Novartis. The deals are worth nearly $3 billion to Isomorphic Labs and offer the company funding to help discover potential new treatments using AI, the company said

We learned more about OpenAI’s board saga
Helen Toner, an AI researcher at Georgetown’s Center for Security and Emerging Technology and a former member of OpenAI’s board, talks to the Wall Street Journal about why she agreed to fire CEO Sam Altman. Without going into details, she underscores that it wasn’t safety that led to the fallout, but a lack of trust. Meanwhile, Microsoft executive Dee Templeton has joined OpenAI’s board as a nonvoting observer. 

A new kind of AI copy can fully replicate famous people. The law is powerless.
Famous people are finding convincing AI replicas in their likeness. A new draft bill in the US called the No Fakes Act would require the creators of these AI replicas to license their use from the original human. But this bill would not apply in cases where the replicated human or the AI system is outside the US. It’s another example of just how incredibly difficult AI regulation is. (Politico)

The largest AI image data set was taken offline after researchers found it is full of child sexual abuse material
Stanford researchers made the explosive discovery about the open-source LAION data set, which powers models such as Stable Diffusion. We knew indiscriminate scraping of the internet meant AI data sets contain tons of biased and harmful content, but this revelation is shocking. We desperately need better data practices in AI! (404 Media