Next-generation, high-performance processor unveiled

The prototype for a revolutionary new general-purpose computer processor, which has the potential to reach trillions of calculations per second, has been designed and built by a team of computer scientists at The University of Texas at Austin.

The new processor, known as TRIPS (Tera-op, Reliable, Intelligently adaptive Processing System), could be used to accelerate industrial, consumer and scientific computing.

Professors Stephen Keckler, Doug Burger and Kathryn McKinley have spent the past seven years working on the underlying technology that culminated in the TRIPS prototype. Their research team designed and built the hardware prototype chips and the software that runs on the chips.

“The TRIPS prototype is the first on a roadmap that will lead to ultra-powerful, flexible processors implemented in nanoscale technologies,” said Burger, associate professor of computer sciences.

TRIPS is a demonstration of a new class of processing architectures called Explicit Data Graph Execution (EDGE). Unlike conventional architectures that process one instruction at a time, EDGE can process large blocks of information all at once and more efficiently.
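
To see why block-at-once execution can help, consider a minimal sketch of the dataflow idea (Python; the block and instruction names are invented for illustration): each instruction in a block names its consumers, and any instruction whose operands have all arrived can fire, so independent operations execute in the same step rather than one at a time.

    # Minimal sketch of dataflow-style block execution (illustrative only).
    # Each instruction lists how many inputs it waits for and which
    # instructions consume its result.
    block = {
        "load_a": [0, ["add1"]],
        "load_b": [0, ["add1"]],
        "load_c": [0, ["add2"]],
        "add1":   [2, ["add2"]],   # needs load_a and load_b
        "add2":   [2, ["store"]],  # needs add1 and load_c
        "store":  [1, []],
    }

    step = 0
    ready = [name for name, (needed, _) in block.items() if needed == 0]
    while ready:
        step += 1
        print(f"step {step}: fire {ready}")   # independent ops fire together
        next_ready = []
        for name in ready:
            for consumer in block[name][1]:
                block[consumer][0] -= 1       # deliver result downstream
                if block[consumer][0] == 0:
                    next_ready.append(consumer)
        ready = next_ready
    # step 1: fire ['load_a', 'load_b', 'load_c']   <- three ops at once
    # step 2: fire ['add1']
    # step 3: fire ['add2']
    # step 4: fire ['store']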

Current “multicore” processing technologies increase speed by adding more processors, which individually may not be any faster than previous processors.

Adding processors shifts the burden of obtaining better performance to software programmers, who must assume the difficult task of rewriting their code to run well on a potentially large number of processors.

“EDGE technology offers an alternative approach when the race to multicore runs out of steam,” said Keckler, associate professor of computer sciences.

Each TRIPS chip contains two processing cores, each of which can issue 16 operations per cycle with up to 1,024 instructions in flight simultaneously. Current high-performance processors are typically designed to sustain a maximum execution rate of four operations per cycle.
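
For scale, a back-of-envelope calculation from those figures (Python; the clock rate is a hypothetical placeholder, since the article quotes only per-cycle numbers):

    # Peak-rate arithmetic from the figures above; the clock is assumed.
    cores_per_chip = 2
    ops_per_cycle = 16        # per core, as quoted
    clock_hz = 500e6          # hypothetical 500 MHz clock

    peak = cores_per_chip * ops_per_cycle * clock_hz
    print(f"peak: {peak / 1e9:.1f} billion ops/s per chip")   # 16.0
    # At this assumed clock, reaching a tera-op (1e12 ops/s) would take
    # on the order of 1e12 / peak = ~62 such chips, or much wider and
    # faster future implementations.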

Though the prototype contains two 16-wide processors per chip, the research team aims to scale this up with further development.

Source: University of Texas at Austin



43 thoughts on “Next-generation, high-performance processor unveiled”

  1. MainFragger,

    One reason why we use binary systems to perform mathematical operations is relative immunity to noise. Having only two states, high and low, means there is a very large difference between the states that allows noise to be ignored. To be more explicit, anything above a set threshold is high, and anything below is low. Thus, when noise, an inevitable intruder, appears on the signal, it can be amazingly high before it causes an error. But in a system that tries to use multiple levels, such as your hexadecimal example, any noise greater than about 1/32nd of the full scale will cause an error (a small worked example follows at the end of this comment). The greater the precision required of a given signal, the smaller the noise that will cause an error. Thus, the system is very likely to be error prone.

    Another reason binary systems are used is cost. It is far easier to make on/off switches than it is to make analog amplifiers with the accuracy needed for higher-modulo math systems. Analog computers were developed first, but binary computers proved to be far more economical. (As a youngster, I built such a simple analog computer, just for kicks, after reading about them in an electronics hobby book from a few decades earlier.)

    –Candice H. Brown Elliott
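
    A quick worked example of the margin arithmetic above (a Python sketch; the function name is illustrative):

        # With L evenly spaced levels on a normalized 0..1 signal, adjacent
        # levels sit 1/(L-1) apart, so noise beyond half that spacing can
        # flip one level into its neighbor.
        def noise_margin(levels):
            return 1.0 / (2 * (levels - 1))

        for levels in (2, 16, 32):
            print(f"{levels:2d} levels: tolerates noise up to "
                  f"{noise_margin(levels):.3f} of full scale")
        # 2 levels:  0.500 -> binary shrugs off enormous noise
        # 16 levels: 0.033 -> roughly the 1/32nd-of-full-scale figure above
        # 32 levels: 0.016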

  2. In the future, we won’t even need or use ‘processors’.
    All this “in the future” nonsense is nothing short of a car commercial with its “announcing” and “the all-new” mantras. They’re a dime a dozen, and I might buy one.

  3. I read the introductory PDF on their web site. Interesting stuff.

    What these guys are doing is trying to replace the superscalar architecture (which eats up a lot of transistors and power on architectures such as the x86 for things like register renaming and out-of-order execution). The advantage this has over x86s is that you could cram more cores onto the same die, because you’d be wasting less chip real estate, and you should be less sensitive to delays due to cache misses. You might also use the ALUs better. The advantage this has over GPU-type processors is that GPUs typically want to repeat the same operations on similar data (for example, processing 16 pixels in parallel), and if branches are taken, GPUs want to branch the same way for all the pixels (see the sketch at the end of this comment). This architecture doesn’t have that requirement.

    I think it’s a nice idea, but I doubt they could get anywhere close to the performance of x86 chips or GPUs any time soon unless they get major funding and access to the high-end fabs. Still, it’s interesting research… cool to see people trying a different approach.
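
    To make the GPU contrast concrete, here is a toy sketch of SIMD lockstep branching (Python; the data and operations are invented for illustration):

        # Toy model of SIMD branch divergence: a GPU-style lane group runs
        # in lockstep, so when lanes disagree on a branch, both paths are
        # executed and each lane masks out the path it didn't take.
        pixels = [3, 7, 2, 9]                  # one data item per lane
        took_if = [p > 4 for p in pixels]      # per-lane branch decision
        passes = 1

        # Pass 1: run the "if" path with non-taking lanes masked off.
        result = [p * 2 if t else None for p, t in zip(pixels, took_if)]

        # Pass 2: run the "else" path for the remaining lanes.
        if not all(took_if):
            passes += 1
            result = [r if t else p + 1
                      for r, t, p in zip(result, took_if, pixels)]

        print(result, f"({passes} passes)")    # [4, 14, 3, 18] (2 passes)
        # A dataflow design need not serialize both paths for every lane:
        # each instruction fires when its own operands arrive.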

  4. . simulation (medical, climate, vr)
    . ray-traced gaming at LAST!
    . compiling my linux kernel on the fly :-) … long live my hypervisor !
    . real-time 3D effects
    . computer-aided sensorial
    . mind-control !!! Woehoe!
    . advanced weapon systems
    . top speed navigation in space
    . AI
    . 3D radar systems
    . weather prediction
    . Performing multi-dimensional analyses on the TORA
    . Pi^2
    . forget about huge clusters, think pizza-box supercomputing

    and many many more

  5. You got it right!

    It resembles a GPU.
    Actually, you already have its human-made brother in your PC.

    The difference is that while the GPU is already optimized for one specific task (graphics)… this chip specializes in automatically and locally building circuits for new specific tasks (like a GPU does for graphics).

    This means people will no longer need to hand-build circuits for something like a GPU; instead, many different specialized tasks can be optimized temporarily in hardware (circuit programming) to get a similar result.

    Number-crunching is another specific task being introduced into our PCs… BUT those chips remain general, as they should be.

    Now imagine the following scenario:
    You have a cipher to crack (just an example)… you have millions of chips specialized in AES… any variation of AES will be a problem… you lose a lot of time programming a new chip…

    You need to adapt your high-level programming, the optimizations, the circuits. Such solutions have existed for decades, but they are expensive and confined to certain services… OTHER services also need them… MORE available power is needed… and cheaper!

    NOW you can have a cheap solution that can be deployed in higher quantities… and easily ported to other tasks. Naturally this has a lot of uses, though not for the common user’s needs, which GPUs already serve. Users already have this thing (in limited form) in their PCs.

    So this is not a new-generation CPU, just a cheaper generalization of what already exists. Naturally it is a very powerful tool… especially (and this is the interest of it) because it will be cheaper and available in wider quantities.

    The result of more widespread and/or increased use depends on what it is used for… and its power amplifies whatever way it is used. Personally, while I find it promising in some areas, it’s also scary in others. That’s mankind’s usual problem: power, and the lack of wisdom to use it right.

    Cheers.

    P.S. – I did digress, sorry.
    As I did, I’ll add a note for another user who suggested many valuable applications… and some bad, and some that just seem silly:
    The user mentioned mind control… Well, that’s not so silly if we look at it from the right angle. For example: we do mind control all the time. Just ask a mind doctor… or an ad maker… or a political campaign expert… or… You get the idea, so let’s keep it simple.

    The fact is that people are very limited to what is familiar to them… and since they are mostly familiar with what is GIVEN to them (e.g., TV, news), that is one problem that propaganda exploits. There are other problems, but this is not the place. Just consider that a fish in the water does not see it. And a fairly intelligent person is easy to fool with their own words and their own limitations, because the words feel important and the limitations go unrecognized. We also make our own waters… and churches/schools/media show how fragile we are to the social environment that builds our belief system.

    Best wishes.

  6. “Their research team designed and built the hardware prototype chips and the software that runs on the chips.”

    Hmmm. A Micro Operating System for Beta Use of any Macro Operating System? ….. which is Really a Roadmap to Route Highly Enriched Information to and from Root Sources/Intelligent Servers?…….. SMARTer chips?

    Or merely more Intelligent Programmers in AI Environments/Virtual Domains? [Intelligent as in Viably Imaginative]

    Web servers don’t care about x86 compatibility as long as the software is ported to the platform.

    I’m wondering: most of today’s (binary) logic problems have a single input flow of instructions, so what is the use of having so many calculations side by side? It is powerful, but where, in what fields, would this be required? I don’t think a normal PC would require it (programs with 1,024 threads are unlikely). I can imagine it would be handy in computational biochemistry, but what would the other target fields be???

    I think the key consideration when designing an architecture for general use is the programmers. If you can make something that provides a simple interface to users at the lowest level (that is, a very straightforward instruction set), then I’d expect adoption to be highly encouraged. Of course, there also need to be economic benefits all the way around. If the chip is expensive as hell, hard to obtain in bulk, unstable, unscalable and so on, then there’s going to be no real reason to adopt it at all.

    Given that Intel announced today that they’re opening things up, I suspect it will be easier for alternatives such as this to be adapted to work with existing PCs. If this CPU could provide even some sort of x86 emulation at a low level while keeping its alternative features readily available, then it stands a great chance of being a success.

    I wish them the best. I may have to consider Austin now for my master’s; I can’t wait to really get hardcore with this stuff… :)

    I just want to say something to the naysayers. If you read more closely, this work is being funded by DARPA. If that is your criterion for predicting failure, just look at the Internet, another DARPA-funded creation. Reading further, the research team is not just creating another “academic” solution, but is pursuing all avenues toward an end-user-usable solution: hardware, software, the whole package. It’s taken them about seven years to get this far, and if DARPA is still funding them after seven years, there must be something real to this.

  11. Um… A wide (and ridiculously long?) processor pipe does not invalidate the usefulness of a processor.

    Specialized chips are seen everywhere; you DO know what memory controllers and GPUs do, right? Please?

    Programmable Logic Circuits are not evil; they’re used in everything from your car to your oven.

    You are insanely paranoid and should go away :D

    EDGE doesn’t “issue” instructions in the same way that a typical processor does; instead, it issues data to a set of execution units that have been pre-issued with instructions (a toy model follows at the end of this comment). This allows it to offer considerable parallelism without difficult programming. It would be worthwhile to read the resources available at the project web site, http://www.cs.utexas.edu/~trips/, before posting more.

    As for making it open source… ever tried to work on something for seven years without corporate backing?
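
    A toy model of that pre-issue-then-route-data idea (Python; the unit layout and operations are invented for illustration):

        # Execution units are pre-loaded with one instruction each; operands
        # are *sent to* units, and a unit fires once all its slots fill.
        # Results are routed to (unit, slot) targets, not to registers.
        units = {
            "u0": {"op": lambda a: a + 10,   "need": 1, "got": {}, "targets": [("u2", 0)]},
            "u1": {"op": lambda a: a * 2,    "need": 1, "got": {}, "targets": [("u2", 1)]},
            "u2": {"op": lambda a, b: a - b, "need": 2, "got": {}, "targets": []},
        }

        def send(unit, slot, value):
            """Deliver an operand; fire the unit when all operands arrive."""
            u = units[unit]
            u["got"][slot] = value
            if len(u["got"]) == u["need"]:
                result = u["op"](*(u["got"][s] for s in sorted(u["got"])))
                print(f"{unit} fires -> {result}")
                for tgt, tslot in u["targets"]:
                    send(tgt, tslot, result)   # route the result onward

        # "Issuing data": inject the block's inputs and let execution ripple.
        send("u0", 0, 5)   # u0 fires -> 15
        send("u1", 0, 5)   # u1 fires -> 10, then u2 fires -> 5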

    This looks like today’s graphics-card chips, like the G80, or more like the RV650, RV600, RV630 and RV610. The R600 can do ~0.5 TFLOPS.

    Basically, this is an expansion of the programmable circuits used for decades by the NSA to crack ciphers. The news is that it allows automatic programming of the circuits.

    The result is that only very repetitive tasks get optimized in circuit programming. This works well for cipher cracking and the war/weather/economic/social simulation and prediction that has been used for decades.

    A U.S. general once said about the end of WWII: the Germans lost the war, but the NoZIs won it! … This is a better tool for them to do more efficiently what they have been doing for the last 50 years. The military-industrial complex will appreciate it!

    To the public this tool is useless, so forget your ideas!
    You won’t need it. Period. It is not for you!

    Cheers.
