
The man who manages Intel’s Big Data revolution

VR-Zone sat down with Ron Kasabian, Intel’s Manager of Big Data Solutions, at Intel’s Big Data Summit in Ho Chi Minh City, to talk about the company’s Big Data push.


Intel is obsessed with Big Data.

The amount of data in the world is growing exponentially. Between Facebook’s one billion users uploading dozens of photos, a city’s traffic cameras recording HD video, and researchers re-sequencing genomes to develop new cancer drugs, the world has a lot of raw data to sort through.

Sorting through this raw data requires not only muscular computers, but also processors and software stacks optimized to churn through it efficiently.

There’s big business in sorting through the petabytes of data the world produces, and Intel is trying to establish itself as a dominant player in the field.

VR-Zone sat down with the man who manages Intel’s Big Data efforts, Ron Kasabian, to talk about some of the server technology Intel is using.

VR-Zone: Processing big data isn’t an entirely new concept, but Intel’s concerted effort to do so is. What’s new?

I think we’re taking a bit more of an end-to-end view on it this time. Instead of just saying ‘let’s enable the ecosystem to grow our infrastructure,’ we’re looking at the end user, not just the independent software vendors (ISVs), and asking what role we can play as an enabler and partner to help this wave continue to move. At the end of the day for us, the greater the uptake of Big Data solutions, the better.

VRZ: IBM is pushing its POWER platform to be a competitor to Intel on the server front. What advantage does Intel have over IBM’s POWER platforms?

We’ll continue to do what we’ve done in the server business. POWER has been a competitive architecture for us for years, and we’ll continue to push against it.

In the Big Data space we’re focusing our efforts on taking key components like Hadoop and helping them run better on our architecture, and we’ll also look for opportunities around certain Hadoop and Big Data workloads — for example, video workloads are drastically different from genomic workloads. We’re looking at specific workloads that we think are going to be high use, and [asking ourselves] whether we develop accelerators and optimizers on our silicon to help those loads run better.
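What makes such different workloads comparable at all is Hadoop’s shared programming model, MapReduce: a job maps input records to key–value pairs, the framework shuffles pairs by key, and a reduce step aggregates each key’s values. As a rough illustration only — this is not Intel’s or Hadoop’s code, and the function names are hypothetical — the model in miniature looks like this in plain Python:

```python
from collections import Counter
from itertools import chain

# Hypothetical sketch of MapReduce word counting, the canonical Hadoop job.
# map_phase() emits (word, 1) pairs per input line; in real Hadoop the
# framework shuffles pairs by key across the cluster before reducing.

def map_phase(line):
    return [(word.lower(), 1) for word in line.split()]

def reduce_phase(pairs):
    counts = Counter()
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

lines = ["big data needs big compute", "data moves compute"]
pairs = chain.from_iterable(map_phase(line) for line in lines)
result = reduce_phase(pairs)
print(result["big"], result["data"])  # 2 2
```

The model is the same everywhere, but the per-record work inside the map step is what varies so drastically between, say, video transcoding and genome alignment — which is why workload-specific silicon can pay off.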

Also, in rackscale architecture: does it make sense to have a Big Data rackscale offering? We don’t know the answer to that yet.

What are we going to actually do? What we’re doing is understanding the gaps: what the issues and challenges are for people. Then, we’ll formulate a product roadmap and ecosystem.

VRZ: What’s Intel’s take on using GPGPU compute in Big Data scenarios?

We haven’t talked a lot about it. We’re going to continue to use Intel architecture. Our graphics capabilities are improving. For graphics-type applications like video, we’ll continue to use x86 cores, and we’ll use accelerators, optimizers, or ASICs to perform specific tasks that might be more graphics-intensive.

VRZ: Do you see ARM-based servers as a threat to Intel’s dominance in Big Data?

Is it a threat? Yes. Are we worried about it? No. We’ve got Avoton coming out, and I think we’ve got a good answer to what ARM is going to be hitting us with in the data center, for Big Data or anything else. We’re really happy with how Avoton is looking — a solid small-server solution.

VRZ: Considering Intel’s partnerships in the Big Data sector, what partnership gives Intel the biggest edge over the competition?

I think Hadoop is going to give us the biggest advantage. Hadoop is going to be as prevalent in Big Data as Linux is in operating systems. We’ve spent millions enabling Linux to run best on Intel; we have for 20 years, and we’ll continue to do that. We’re taking that piece of the playbook and doing the same thing with Hadoop.

VRZ: Has working closely with Hadoop damaged Intel’s partnership with Oracle? After all, they are competitors in many ways.

No. With Oracle and IBM, we’re customers, we’re partners, and we compete in lots of different ways. We’re used to the weird dynamics. IBM had the big PC business, and they still have an x86 server business even though they have POWER.

‘Frenemy’ is what we call it.

VRZ: Looking at Big Data, what’s the biggest challenge for hardware?

I think it’s understanding the workload. Take a server SoC: if I’ve got a specific algorithm that I know is going to run prevalently, say for video transcoding and analysis, I could develop an accelerator, which is essentially a separate IP block sitting on the SoC, so that the CPU core doesn’t have to consume resources performing that task. It can farm the work out to a separate place on the silicon.

We’ve got a number of workloads in our labs that we’re analyzing. We’ll start working with our product roadmap folks to figure out what and where we put [the accelerators] into silicon.
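The offload pattern Kasabian describes has a familiar software analogy. In this purely illustrative sketch (the names and the thread pool are hypothetical stand-ins, not Intel’s design), a dedicated worker plays the role of the separate IP block, taking a fixed-function task off the main execution path:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical analogy for hardware offload: a dedicated worker stands in
# for an accelerator IP block, so the main path stays free for other work.

def transcode(frame):
    # Stand-in for a fixed-function task such as video transcoding.
    return frame.upper()

accelerator = ThreadPoolExecutor(max_workers=1)  # the "IP block"

frames = ["frame-a", "frame-b"]
futures = [accelerator.submit(transcode, f) for f in frames]  # farm work out
# ... the main path is free to do general-purpose work here ...
results = [f.result() for f in futures]  # collect results when needed
print(results)  # ['FRAME-A', 'FRAME-B']
```

The hardware version makes the same trade: the general-purpose core dispatches the task and moves on, while a block specialized for that one job does it with less power and time.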

VRZ: Looking forward to 2014, what excites you the most about Big Data?

It’s the opportunities. That’s also what worries me the most — how fast can we get there? I think there are some huge problems for society that can be solved with Big Data. Genomics is a huge one. There are many of us at Intel who are as passionate about solving those problems as we are about making sure our business is successful.

To me, the most exciting thing is being able to use Intel architecture to solve problems. Not just as a machine in the back-end running these computations, but as a solution stack with Hadoop and other components.

