Innovation

How To Scale For AI In The Wake Of Moore’s Law

By admin | August 31, 2023 | 4 Mins Read

Video: Thompson’s talk goes over a lot of what we have to think about for the next steps in AI/ML.

We’re certainly interested in what Thompson has to say about modern progress and its context, and about where we’re going with things like large language models…

Neil Thompson starts out talking about underlying trends and the emergence of generative AI, which we’ve seen in technologies like GPT and Stable Diffusion.

Setting the stage, he notes that some of these models take many millions of dollars to build, and he invites us to think about the logistics that are going to have an impact on markets and beyond.

However, he quickly pivots to an analysis of an older technology, computer vision, for which he contends we have a decade of useful data to work with. That, he posits, might be a good guide to the future.

Showing an example with the ImageNet database, Thompson describes how computer vision progressed quickly, and looks at relevant transitions for these kinds of smart programs.

“We notice that we’re on an exponential scale,” he says, “we see this very, very smooth transition. And I think if you look at this, you can say, ‘Boy, I really understand why it feels like AI is improving so fast,’ right? … you can make these nice projections.”
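To make that concrete, here is a minimal sketch of what such a projection looks like in practice: fit a straight line to the log of a benchmark's error rate and extrapolate. The years and error rates below are made-up placeholders, not data from Thompson's talk.

```python
# Minimal sketch of a log-scale projection; the data points are
# hypothetical placeholders, not figures from the talk.
import numpy as np

years = np.array([2012, 2014, 2016, 2018, 2020])   # hypothetical years
error = np.array([0.26, 0.12, 0.06, 0.03, 0.015])  # hypothetical error rates

# Exponential improvement is a straight line in log space.
slope, intercept = np.polyfit(years, np.log(error), 1)

def projected_error(year):
    """Extrapolate the fitted trend to a future year."""
    return float(np.exp(intercept + slope * year))

print(f"Projected error rate in 2025: {projected_error(2025):.4f}")
```

That smoothness is exactly why, as Thompson says, it feels like you can draw the curve forward with confidence.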

He also asks what’s under the hood, and that’s where you get to a critical question of how these systems are going to consume resources.

More computing power, he notes, gets expensive quickly, but also generates a lot of carbon dioxide.

Even as we’ve been trying to manage the carbon footprint, Thompson suggests, we’ve also increased the size of the models, which increases the footprint even more.

“We have more than taken back the benefits of the efficiency improvements in order to expand these models,” he says, enumerating some of the problems to be solved. “So this, really, is a huge effect. And we need to be thinking about it. Because as these models get bigger … this graph is already showing you that the carbon dioxide might be an issue, (and) there’s a whole second thing, which is just that these models get more and more expensive.”
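As a back-of-the-envelope illustration of that effect, here is how training cost and emissions compound as models grow, assuming compute scales roughly with parameter count. Every per-GPU-hour number below is an assumption made for the sake of the sketch, not a figure from the talk.

```python
# Back-of-the-envelope sketch; all constants are illustrative assumptions,
# not figures from Thompson's talk.
def training_footprint(params_billion, gpu_hours_per_billion=5_000,
                       cost_per_gpu_hour=2.0, kwh_per_gpu_hour=0.4,
                       kg_co2_per_kwh=0.4):
    """Rough cost ($) and emissions (kg CO2) for one training run,
    assuming compute grows roughly linearly with parameter count."""
    gpu_hours = params_billion * gpu_hours_per_billion
    cost = gpu_hours * cost_per_gpu_hour
    co2_kg = gpu_hours * kwh_per_gpu_hour * kg_co2_per_kwh
    return cost, co2_kg

for size in (1, 10, 100):  # model sizes in billions of parameters
    cost, co2 = training_footprint(size)
    print(f"{size:>3}B params: ~${cost:,.0f}, ~{co2:,.0f} kg CO2")
```

Whatever the exact constants, the shape of the result is the point: a 100x larger model costs and emits on the order of 100x more, which quickly swamps efficiency gains.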

Positing a kind of resource scarcity around larger AI systems, Thompson suggests it might lead to less diversity of models, which can be concerning.

He also displays some charts of computational demand for deep learning, from the 1950s to the present.

Looking at the doubling rates, you see the kind of hockey-stick projections that make it important to think about what comes next.

“(That should) set off your spidey sense, right?” he says. “Because … it sounds an awful lot like Moore’s law, right? And that’s really what’s going on here, is … people are spending about the same amount of money to build a new system, and the new system is more powerful, so they get more out of it. And then they update their computers, again, at about the same cost. And they can do more and more.”
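The arithmetic behind that compounding is simple but worth seeing. The doubling times below are assumptions chosen for illustration, not Thompson's estimates.

```python
# Sketch of the doubling-rate arithmetic behind the "hockey stick":
# at a fixed doubling time, capability compounds geometrically while
# spend per system stays roughly flat. Doubling times are illustrative.
def growth_factor(years, doubling_time_years):
    """How much compute grows over `years` at a given doubling time."""
    return 2 ** (years / doubling_time_years)

print(f"24-month doubling over 10 years: {growth_factor(10, 2.0):,.0f}x")
print(f" 6-month doubling over 10 years: {growth_factor(10, 0.5):,.0f}x")
```

Shorten the doubling time and the same decade goes from a 32x improvement to a million-fold one, which is why small changes in the growth rate matter so much for what comes next.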

For instance, Thompson refers to a “takeoff” in 2009-2010 based on using new kinds of specialized GPUs and multicore chips for AI/ML operations.

Moore’s law, he says, is coming to an end – and if you’ve been paying attention to many of these speakers, you’ve probably already heard that.

With that in mind, Thompson goes over some alternatives for future progress, including hardware accelerators, quantum computing and algorithmic improvements.

However, some of these improvements are less than consistent, and some are obviously still at the theoretical stage.

The challenge, he says, is to find the performance that we need for next-generation systems. He tells the audience:

“This is what my lab’s working on, trying to understand where are we going to get this performance if we want to keep moving up this AI curve, and getting more and more performance the way that we want to?”

It’s something for engineers and others to think about as we craft newer specialized chips, beef up algorithms, and go after the elusive value of quantum computers.
