Amanda Blevins

Technology Magazine sits down with Amanda Blevins, VP and CTO Americas at VMware, to discuss announcements at VMware Explore, partnerships and AI

Amanda Blevins is a Vice President and Chief Technology Officer for the Americas at VMware. She enables customers and partners to achieve their business objectives by co-creating a comprehensive technology strategy with executive leaders.

Blevins leads the Office of the CTO Global Field Programs that develops technical talent and provides programme members the opportunity to participate in Office of the CTO projects and initiatives. 

Blevins has been in the IT field for more than 25 years and has held various infrastructure operations and architecture roles throughout her career.

As VMware’s first and only female Chief Technologist, she leverages her experience and network to drive awareness and equality for women and all underrepresented people in technology fields.

At the 2023 edition of VMware Explore in Las Vegas, Technology Magazine sat down with Blevins to discuss her career, her role and some of the exciting announcements in the world of technology and AI.

Tell us about yourself and your role at VMware

“Currently I'm the Americas CTO at VMware. So we have three Geo Field CTOs, and our responsibility is to interact with customers and partners, talk about our vision and strategy and how we can solve their business problems, and then work with the teams in R&D to influence product roadmap and innovation based on those trends and those conversations.

“I started at VMware 13 or so years ago, as a systems engineer in the field. I live in Colorado, and at the time we had around 4,000 employees and five products, so it was a little different. And so it was just two account teams, a rep and an SE and we just split the entire state.

“So I had 75 customers or something that I covered in Colorado, Nebraska and New Mexico. Over time, I moved up through the ranks inside VMware, became a principal systems engineer — the first woman in the company to do that — and then moved to the office of the CTO and became a chief technologist. And I moved into this role a couple of years ago. 

“It's a great place to be, with a great culture and with fantastic people. Technology is always changing. We're always adding directions and building on what we have and expanding our portfolio. So the technologist in me does not get bored at all.”

Tell us a bit about some of the announcements at Explore this year and what you're most excited by

“We've spent a lot of time interacting with Chief Data Officers, CTOs, CIOs and data scientists at various companies across various industries around the globe to really understand what they're doing or wanting to do with generative AI and large language models. All of that information and other information in the industry has helped us define our path forward.

“One of the most critical areas is around privacy, making sure that our customers can do AI/ML where their data is — whether it's in the public cloud, on-prem or at the edge.

“Really, what we've learned is that organisations are still figuring it out. They're still experimenting and so using public cloud services is great, but then also they want other options and choices. That's where a lot of the open source models come in and that's where they can run on-prem.

“From a VMware perspective, we have our ethical AI approach, where things like explainability and openness are important, so we partner with organisations like Hugging Face, which offers services that align with our values.

“Because of the speed the industry is moving at, having partnerships like Hugging Face and Anyscale with Ray, and of course Nvidia, across those various parts of the ecosystem, from MLOps down to hardware, is very important.”
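For context on what running an open-source model close to the data can look like in practice, here is a minimal sketch using the Hugging Face transformers library; the model name is a small placeholder chosen for illustration, not one VMware or Blevins specified.

```python
# Minimal sketch: running an open-source model on local infrastructure with the
# Hugging Face transformers library. "distilgpt2" is a placeholder open model
# chosen only to keep the example small; it is not a model named in the interview.
from transformers import pipeline

# The weights are downloaded once and inference then runs entirely on-prem (or at
# the edge), so prompts and data never have to leave the environment.
generator = pipeline("text-generation", model="distilgpt2")

result = generator("Our internal change-management runbook begins with", max_new_tokens=40)
print(result[0]["generated_text"])
```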

How do you work with partners at VMware?

“We've partnered with Nvidia for as long as I've been here, probably longer. The original use cases were around virtual desktops and people doing CAD drawings and high graphical intensive use cases, so making sure we can support dGPUs for virtual desktops. 

“That naturally evolved into ML workloads, and we have hundreds of customers that do ML on VMware today because we've always partnered with Nvidia. Supporting the hardware is one aspect, but now it's also about making sure that our Cloud Foundation platform can integrate Nvidia AI Enterprise for the higher-level services that we don't provide.

“That's why we have partnerships with Nvidia, Hugging Face, Anyscale and others. We provide the best platform for workloads, for scheduling and for operations, keeping that operational model the same regardless of whether it's an AI/ML workload, a traditional workload or a modern app workload. Now IT needs to be in charge of that; it's not just data scientists anymore. That's why Cloud Foundation is so important.

“It’s about making sure that our platform is ready to easily automate and accept those workloads. Most AI/ML workloads are deployed as containers, and Tanzu Kubernetes Grid is built right into Cloud Foundation, so that's a pretty natural step forward.”
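As an illustration of that last point, the sketch below uses the official Kubernetes Python client to deploy a containerised inference workload to any conformant cluster, a Tanzu Kubernetes Grid cluster included; the image name, namespace and GPU request are assumptions made for the example rather than anything prescribed by VMware.

```python
# Sketch: deploying a containerised ML inference workload with the official
# Kubernetes Python client. The image, namespace and GPU request below are
# hypothetical values for illustration.
from kubernetes import client, config

config.load_kube_config()  # use the current kubeconfig context

container = client.V1Container(
    name="llm-inference",
    image="registry.example.com/ml/llm-inference:latest",  # hypothetical image
    ports=[client.V1ContainerPort(container_port=8080)],
    resources=client.V1ResourceRequirements(limits={"nvidia.com/gpu": "1"}),
)

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="llm-inference"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "llm-inference"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "llm-inference"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="ml-workloads", body=deployment)
```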

Tell us about the importance of specialism when it comes to building LLMs

“Performance is important to us, cost is important to us and sustainability is important to us, and all those things are related. There's a good analogy that I heard that I like to repeat, where if you ask ChatGPT, ‘how do I make toast?’, that prompt and that query return might cost you a dollar. That's a really expensive slice of toast. 

“If I have a domain-specific model that's only for making toast, it's much smaller, it doesn't require as many resources to run and it costs less. So when I ask it how to make toast, maybe it's two cents, and that's much more appropriate.

“So, from a sustainability perspective, it's not taking as much power and cooling, and also from a cost perspective, I don't need as much horsepower, or as many resources. And so that is really important.
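The arithmetic behind the analogy fits in a few lines. The per-query figures below are the illustrative numbers from the analogy itself, and the monthly query volume is an assumption, not measured data.

```python
# Back-of-the-envelope version of the "expensive toast" analogy. The per-query
# costs come from the analogy above; the query volume is an assumed figure.
general_cost_per_query = 1.00   # dollars, large general-purpose model
domain_cost_per_query = 0.02    # dollars, small domain-specific model
queries_per_month = 100_000     # assumed workload, for illustration only

for name, cost in [("general model", general_cost_per_query),
                   ("domain-specific model", domain_cost_per_query)]:
    print(f"{name}: ${cost * queries_per_month:,.0f} per month")
```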

“There’s also the other aspect that if you just have a general model that hasn't been fine-tuned on your domain-specific data, you're not going to get responses that are as helpful. Having that domain-specific data is important for business value, but also because creating an LLM from scratch takes months and tens of thousands of GPUs, and most organisations don't have that; that's not going to happen.

“Being able to take an LLM that was originally trained on an open data set they trust, and then to put it next to their own data and fine-tune with that, is going to be important. That will make sure they have the best responses, but also the lowest cost, the best performance and the more sustainable choice.”
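To make the fine-tuning step concrete, here is a minimal sketch with the Hugging Face transformers and datasets libraries; the base model, the local corpus file and the training settings are all assumptions for illustration rather than a recommended recipe.

```python
# Sketch: fine-tuning a small open model on a domain-specific text corpus.
# The base model name, corpus path and hyperparameters are illustrative assumptions.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base_model = "distilgpt2"  # placeholder open model, small enough to run locally
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Domain-specific corpus: one document per line in a local text file (assumed path).
dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})["train"]
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-model", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("domain-model")
```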

Is AI the focus for VMware going forward?

“It's not the only thing, but it's definitely an important thing. I think back to all the solutions that already have AI/ML embedded in them. Even if you think of Aria Operations, it is essentially an analytics engine. Some people might call it a monitoring tool, but that's not true. As long as you give us time series data, we'll be able to tell you what's going on in it, whatever that data is.

“It could be about the performance of your call centre, the people and the type of calls they're taking. We've had customers use it for that, but as long as we get that data, we can tell you what's normal and then tell you when something abnormal happens. And we've had that solution for over a decade. Then there's also the ML that we use within NSX today around advanced threat protection, where it could be a malicious payload that has a signature and is easy to identify, or one that's behaving in a way it shouldn't, which we can identify because of its behaviour.
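That "learn what normal looks like, flag what isn't" idea can be illustrated with a generic rolling z-score detector; the sketch below is a simplified stand-in for the concept, not VMware's actual Aria Operations or NSX analytics.

```python
# Generic rolling z-score anomaly detection over a time series. A simplified
# illustration of "learn what's normal, flag what isn't", not VMware's implementation.
import numpy as np

def find_anomalies(values, window=60, threshold=3.0):
    """Return indices of points that deviate from their trailing window by more
    than `threshold` standard deviations."""
    values = np.asarray(values, dtype=float)
    anomalies = []
    for i in range(window, len(values)):
        history = values[i - window:i]
        mean, std = history.mean(), history.std()
        if std > 0 and abs(values[i] - mean) > threshold * std:
            anomalies.append(i)
    return anomalies

# Example: a steady call-centre metric with one injected spike.
rng = np.random.default_rng(0)
series = 100 + rng.normal(0, 1, size=200)
series[150] += 50
print(find_anomalies(series))  # expected to flag the spike around index 150
```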

“We have a number of examples of where AI has been embedded in our solutions and now it's around, okay, this technology is more accessible, it's easier to use, it's expected from the industry. That just speeds up our innovation in those areas, but it's a path we've been on for a while.”
