Fostering responsible GenAI with WalkMe’s Vivek Behl
It is no secret that generative AI offers plenty of benefits for global business operations. However, it has been suggested that these new use cases could bring with them an over-reliance on, or complacency with, the technology.
A Deloitte survey recently found that more than half of people in the UK (52%) have heard of generative AI, with 26% having used the technology. With this in mind, Technology Magazine speaks with Vivek Behl, Digital Transformation Officer at WalkMe, about how organisations can better leverage AI tools to suit their business needs, whilst also ensuring that these tools are developed safely.
Behl argues that, as with any new technology, if staff are experimenting with generative AI – even with the very best intentions of boosting creativity and productivity – it is hard to guarantee that every business is using it safely.
How can businesses better equip their employees to safely leverage AI?
“Realistically, businesses can’t assume their employees are aware of the risks associated with generative AI. Organisations need to take control of the situation, so that they can start to unlock the benefits generative AI can bring, while also ensuring it is used in the right way.
“One obvious step might seem to be blocking or banning generative AI applications outright. But while this may look like a rational, if disproportionate, response, it runs the risk of alienating staff and falling behind competitors who know how to use AI effectively. After all, generative AI applications can lead to massive productivity and creativity boosts when used correctly and safely.
“Instead of a blanket ban, organisations should take responsibility for enabling the safe use of these new applications – beginning with understanding how and where employees are currently using generative AI. The greatest risks come from ‘Shadow AI’: employees using tools like ChatGPT or Second Brain with zero oversight or permission from their IT or security departments.
“Gaining visibility over the entire IT stack, including applications and websites that the business doesn’t regulate, is crucial to assessing the current state of play.”
How can organisations harness generative AI tools to their full potential?
“When they understand generative AI use, employers will have the knowledge they need to bring ‘Shadow AI’ into the light and fully enjoy the benefits generative AI technologies can bring. Chances are that most employees use AI applications for entirely legitimate purposes – such as research or finding the best way to communicate a message or solve a complex problem.
“Knowing this, organisations can make sure they offer employees the capabilities they need, backed up with full oversight, and a clear, unambiguous AI use policy that is safe for companies and individuals alike.
“Businesses’ responsibilities don’t end there – they have to ensure employees know how to use the tools at their disposal, and how to follow best practices. Ideally, this will be automated in real time: for instance, pop-ups that alert AI users when they are at risk of taking action or using tools that aren’t sanctioned by the organisation.
“Similarly, employees who want to perform a specific task should be able to access guidance that takes them through the appropriate workflows and minimises risk. Or perhaps organisations want to reroute employees away from certain generative AI applications towards others that are safer. All of this can be provided to employees in the right context, right there on their screen, through a digital adoption platform (DAP), which gives end users step-by-step guidance and automation, along with actionable data insights across an organisation’s tech stack.”
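To make the idea of in-context guardrails more concrete, the sketch below shows, in principle, how a policy check of this kind could decide whether to allow a tool, warn the user, or point them to a sanctioned alternative. It is a hypothetical Python illustration only, not a description of WalkMe’s DAP: the tool names, domains and policy rules are invented, and a real platform would deliver this through in-browser guidance rather than a standalone script.

```python
# Hypothetical illustration: a minimal policy check of the kind a DAP-style
# guardrail might run before an employee submits a prompt to an AI tool.
# All tool names, domains and rules below are invented for this sketch.

from dataclasses import dataclass

SANCTIONED_TOOLS = {"approved-ai.internal.example.com"}                  # tools vetted by IT
REROUTE_MAP = {"chat.openai.com": "approved-ai.internal.example.com"}    # safer alternatives
SENSITIVE_MARKERS = ("customer_id", "api_key", "salary")                 # crude data-loss signals


@dataclass
class Guidance:
    action: str   # "allow", "warn", or "reroute"
    message: str


def check_ai_usage(domain: str, prompt: str) -> Guidance:
    """Return in-context guidance for an employee about to use an AI tool."""
    # Flag prompts that appear to contain sensitive company data.
    if any(marker in prompt.lower() for marker in SENSITIVE_MARKERS):
        return Guidance("warn", "This prompt may contain sensitive data. Please remove it first.")
    # Sanctioned tools are allowed without interruption.
    if domain in SANCTIONED_TOOLS:
        return Guidance("allow", "This tool is approved under the company AI use policy.")
    # Known-but-unsanctioned tools are rerouted to a safer equivalent.
    if domain in REROUTE_MAP:
        return Guidance("reroute", f"Please use the sanctioned alternative: {REROUTE_MAP[domain]}")
    # Anything else triggers a policy reminder.
    return Guidance("warn", "This AI tool has not been reviewed by IT. Check the AI use policy.")


if __name__ == "__main__":
    print(check_ai_usage("chat.openai.com", "Summarise this press release"))
```

In practice, the point of the approach Behl describes is that this kind of decision is surfaced to the employee at the moment of use, rather than buried in a written policy.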
How can they ensure that these use cases are responsible?
“As the AI landscape is constantly evolving, organisations must be prepared to continuously learn and relearn when it comes to AI. DAPs that automatically and intelligently understand employees’ AI application use can take the onus off employees to learn and relearn safe AI usage policies, simply guiding them in the moment.
“Organisations have a responsibility to provide their employees with the technology they need to do their jobs well, and the writing on the wall seems to be that employees want to use cutting-edge technologies like generative AI that make their work easier and more efficient. Businesses should strive to empower their employees’ ingenuity with the right guardrails in place, rather than seek to stifle their use of new tech.
“With the right policies and technology guardrails in place, employees will be better equipped to optimise business resources and do their best work without compromising sensitive company information.”
When it comes to developing AI, how can businesses strike a good balance?
“Generative AI applications can be a tremendous boon to organisations' ability to create, innovate, and respond to opportunities before competitors. They can help to complete certain tasks quicker than before, and free up employees’ time and headspace for more rewarding activities.
“However, AI tools could also put sensitive information into the wrong hands – threatening privacy violations and damaging customer trust.
“Responding to these challenges requires organisations to strike a balance – guiding employees, without curbing their productivity. This means first understanding current AI use, using that understanding to inform a clear strategy, and providing the guidance and technology guardrails to ensure policies are followed and generative AI technology is safely used. This will take time but will pay off – with a workforce that uses generative AI to boost their own productivity, and does not introduce new risks to the business.”