AI washing is dirty business. Lenovo’s COO explains how to avoid it

Whether it’s impressive in every case or not, artificial intelligence (AI) seems to be everywhere. However, some of what’s marketed as AI isn’t even really AI — just a product with the label slapped on to boost interest and attention. 

This practice of making excessive claims about AI is called AI washing. While it may seem harmless, AI washing can reduce the integrity of AI solutions, make it harder to see what really works, and complicate how we evaluate the success of this evolving technology. 

Also: How Gen AI’s balance of power is shifting

I had the chance to talk with Lenovo’s Linda Yao, COO and Head of Strategy for the Solutions & Services business and Vice President of AI Solutions & Services, about the concept, what it means for business, what to watch out for, and what you can do to ensure your AI efforts are transparent and credible.

ZDNET: Please introduce yourself and give us some background about your role at Lenovo.

Linda Yao: As part of Lenovo’s highest-growth business, my responsibility is building the AI Services practice so that we continue innovating with our customers to solve their most interesting challenges.

Our AI center of excellence wields core competencies across security, people, technology, and processes that help customers implement the right AI strategies and solutions for their use cases. Our mission is to help organizations move successfully from AI concepts to real results by scaling AI quickly, responsibly, and securely.

In addition, I lead strategy and operations for the business unit, which provides ample opportunities to drink my own champagne and deploy AI that transforms our operational processes and the customer experience.

ZDNET: How do you define AI washing, and why is it a growing concern in the tech industry?

LY: The promise of artificial intelligence has long captured our imaginations, especially now that generative AI has become easily accessible to us on an organizational as well as personal level. Because its potential appears unbounded, there’s an urge to treat this newfound technology as a cure for everything.

Also: AI accelerates software development to breakneck speeds, but measuring that is tricky

While Lenovo’s data shows that just about every company is increasing its investment in AI, it also shows that three out of five of those companies aren’t confident in the return on that investment (ROI). It’s not clear whether those AI implementations are delivering meaningful business outcomes to their organizations yet.

Because AI’s impact isn’t yet well-defined, and the technology itself isn’t transparently understood by everyone, we leave room for interpretation and embellishment. In this way, the term AI washing draws a parallel to greenwashing, wherein companies might make speculative claims about the environmental benefits of their products.

Although I don’t believe it’s done nefariously, AI washing can lead to skepticism and distrust among consumers and stakeholders, diminishing appreciation of and trust in the genuine AI advancements in the pipeline.

ZDNET: What are the long-term implications of AI washing for businesses and consumers?

LY: For businesses, there’s a real fear of missing out (FOMO). The risk of AI washing is that it can divert management attention and resources away from practical AI innovation. Instead of investing in developing meaningful AI capabilities, providers might be led into misguided investments or superficial enhancements that slow down the real progress they could be making with the technology.

For enterprises on the receiving end, AI washing complicates decision-making. These businesses may struggle to identify truly valuable AI solutions amidst the noise, potentially leading to wasted investments in underwhelming technologies. This can hinder digital transformation efforts, stifle innovation, and jeopardize business performance.

Both providers and business users can benefit from working with trusted AI partners who not only take proactive steps to use AI responsibly, but also take an ethical approach to advising on the right AI choices.

Also: 3 ways to help your staff use generative AI confidently and productively

The impact of AI washing on consumers hits closer to home: data security and privacy risks from poorly designed AI technology, and subpar user experiences or disillusionment with technology that fails to meet quality expectations. Consumers will be looking for brands they trust, technology and form factors that have served them well in the past, training and learning opportunities to make AI more accessible, and transparency from their vendors on AI use.

ZDNET: How can companies ensure their AI claims are accurate and ethically sound?

LY: First, it’s important to acknowledge that introducing impactful generative AI solutions into an organization is not easy, and scaling them can be downright difficult. Compared with maturing an organization’s people, processes, and security policies, adopting the technology might even be the least challenging part.

In fact, Lenovo’s global study of CIOs showed that 76% of CIOs say their organizations do not have an AI-ready corporate policy on operational or ethical use. There are few silver bullets or quick fixes, so recognizing that this is an incremental process is an important step, and an important disclosure to make to customers. AI service providers should be transparent about what tools, data, and methods are being used, and companies should consider establishing their own AI policies with a clear stance on usage.

Lenovo’s own processes are geared toward ensuring secure, ethical, and responsible AI development and usage, and these best practices underpin our work with customers on their AI adoption journeys.

ZDNET: How does AI washing undermine the true transformative potential of AI technology?

LY: AI washing can conflate embellishment with reality. This perpetuates the risk of AI fatigue that, in aggregate, would deepen the “trough of disillusionment” and hinder progress and investment in real AI innovation.

That’s why I believe it’s important to take a practical and pragmatic approach to AI implementations. We exacerbate the distrust and negative effects of AI washing when AI is treated as an abstract concept without tangible results.

Also: This Lenovo 2-in-1 is one of the most versatile business laptops I’ve tested

At Lenovo, we’re all about delivering meaningful business outcomes with proven, hands-on experience, and connecting the deployment of technologies like AI directly to those outcomes.

ZDNET: What strategies can enterprises use to communicate about AI in a way that aligns with their actual capabilities and achievements?

LY: Enterprises should focus on fact-based messaging, transparency, education, and real-world use cases to communicate their AI capabilities accurately. Share specific metrics, case studies, and real-world examples that demonstrate the AI impact on your business and your talent. Be transparent about the development process, data sources, and decision-making.

Also: How Lenovo works on dismantling AI bias while building laptops

At Lenovo, we believe hands-on experience is crucial, and we’ve scaled dozens of real-world use cases with tangible business results to show for it. When you’ve delivered millions of dollars to the bottom line, there’s no need for AI washing — proven methods and measurable impact speak for themselves.

ZDNET: What role does transparency play in building trust around AI initiatives in companies?

LY: Transparency is the cornerstone of trust in AI initiatives. It demystifies the technology, aligns expectations with reality, and brings people along as advocates rather than skeptics. This openness not only reassures stakeholders, but also encourages informed collaboration, driving innovation and confidence in AI’s genuine capabilities.

ZDNET: Can you discuss any specific measures Lenovo has taken to avoid AI washing in its communications and practices?

LY: At Lenovo, we demonstrate our transparency hands-on, by allowing stakeholders to see AI’s real-world impact firsthand – whether it’s in the contact center, on the manufacturing floor, or in the sales bullpen. We reinforce trust in our AI solutions and methods through direct user experience.

Lenovo has been deploying AI in our own IT environment for more than a decade, and our culture of drinking our own champagne stretches decades before that, so this is not new to us!

ZDNET: How does Lenovo address the ethical considerations involved in developing and deploying AI solutions?

LY: AI is changing the business landscape, and Lenovo recognizes the importance of AI that is implemented safely and responsibly. Last year, Lenovo established the Responsible AI Committee, a group of employees representing diverse backgrounds across gender, ethnicity, and disability. Together, they review internal products and external partnerships using the core principles of diversity and inclusion, privacy and security, accountability and reliability, explainability, transparency, and environmental and social impact.

Also: Why AI solutions have just three months to prove themselves

We apply real rigor to our own solutions, as well as the work of our partners, where diversity, equity, and inclusion (DEI) is a priority. We use dedicated tools to evaluate bias in data and identify sub-populations that might be underrepresented or somehow segmented. One such tool is AI Fairness 360, an open-source toolkit that evaluates AI algorithms and training data to help mitigate bias.
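
For readers curious what such a bias check looks like in practice, here is a minimal, hypothetical sketch using the open-source AI Fairness 360 (aif360) Python toolkit mentioned above. The toy dataset, column names, and group encodings are illustrative assumptions, not a description of Lenovo's actual pipeline.

```python
# Hypothetical sketch: measuring how favorable outcomes are distributed across
# a protected group with AI Fairness 360. Data and column names are invented.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy training data: 'approved' is the label, 'gender' the protected attribute
# (1 = privileged group, 0 = unprivileged group in this example).
df = pd.DataFrame({
    "gender":   [1, 1, 1, 1, 0, 0, 0, 0],
    "income":   [60, 75, 50, 90, 45, 55, 40, 65],
    "approved": [1, 1, 0, 1, 0, 1, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["approved"],
    protected_attribute_names=["gender"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"gender": 1}],
    unprivileged_groups=[{"gender": 0}],
)

# Disparate impact: ratio of favorable-outcome rates (ideal is close to 1.0).
# Statistical parity difference: gap in those rates (ideal is close to 0.0).
print("Disparate impact:        ", metric.disparate_impact())
print("Statistical parity diff.:", metric.statistical_parity_difference())
```

A disparate impact near 1.0 and a statistical parity difference near 0.0 suggest the groups receive favorable labels at similar rates; larger gaps flag sub-populations worth a closer look before any model is trained.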

ZDNET: What are some common misconceptions about AI that contribute to AI washing, and how can they be addressed?

LY: Let’s talk about three myths:

Myth: AI can solve any problem and immediately delivers huge ROI.

Reality: AI excels in specific tasks but no algorithm is a universal solution. Its benefits generally accrue over time with careful iterations. We address this with our people-centric strategy to educate stakeholders about AI’s strengths and limitations, highlighting our own practical experiences in deploying AI and the real use cases that continue to accrue ROI over time as learnings are incorporated.

Myth: AI works autonomously without human oversight.

Reality: Most AI solutions, especially with generative AI, require a level of governance for effective implementation and ethical use. Again, our people-centric strategy comes into play here by placing humans in the loop as the experts to guide the usage of AI and interpret its results.

Myth: More data means better AI.

Reality: The quality and relevance of your data set are more critical than the sheer volume. Our AI services practice helps customers assess their data readiness for AI and ensure their data estates are able to achieve the business outcomes they desire. If not, then our data services will help get them there.
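
To make the quality-over-quantity point concrete, here is a small, hypothetical sketch of the kind of basic data-readiness checks a team might run before training: completeness, duplication, and label balance. The column names and toy data are illustrative assumptions, not a description of Lenovo's data services.

```python
# Hypothetical data-readiness checks; column names and toy data are invented.
import pandas as pd

def readiness_report(df: pd.DataFrame, label_col: str) -> dict:
    """Summarize quality signals that often matter more than raw row count."""
    return {
        "rows": len(df),
        # Share of missing values per column (completeness).
        "missing_share": df.isna().mean().round(3).to_dict(),
        # Exact duplicate rows inflate volume without adding information.
        "duplicate_rows": int(df.duplicated().sum()),
        # Heavily skewed labels often hurt more than a smaller, balanced set.
        "label_balance": df[label_col].value_counts(normalize=True).round(3).to_dict(),
    }

if __name__ == "__main__":
    toy = pd.DataFrame({
        "feature": [1.0, 2.0, None, 4.0, 4.0, 5.0],
        "label":   [1, 0, 1, 0, 0, 1],
    })
    print(readiness_report(toy, label_col="label"))
```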

ZDNET: What are the potential risks of not addressing AI washing in the tech industry? How can industry standards and regulations help mitigate the risks associated with AI washing?

LY: Industry standards play an important role in mitigating AI washing. Earlier this year, Lenovo signed the UNESCO Recommendation on the Ethics of Artificial Intelligence, a commitment to “prevent, mitigate, or remedy” the adverse effects of AI, in addition to specific measures to fix issues in AI solutions that may have already been launched in the market.

This May, we joined the Government of Canada’s Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems. These are important commitments that hold the industry accountable not only for the safe and ethical use of AI, but also for its explainability and transparency.

ZDNET: What future trends do you predict in the field of AI ethics and governance?

LY: AI ethics and governance will continue to evolve and tighten, and businesses at the forefront of AI adoption will need to take decisive action to guide the rest of the industry on ethical, responsible AI use. Specifically, let’s look at three areas.

  1. Stricter regulations and accountability: Businesses will need to comply with increasingly comprehensive regulations on data privacy, bias, and ethical use. They will establish clearer accountability, whether through Chief AI Officers, Chief Responsibility Officers, or other roles, and put corporate policies in place to ensure responsible AI practices. They will likely seek trusted AI advisors to help define, benchmark, and implement these policies.

  2. Ethical guidelines and transparency: The industry will move toward standardized ethical principles. Organizations will mandate transparency, providing clear documentation of AI model training, testing, and validation processes. Independent audits and certifications will be more prevalently used.

  3. Fair and ethical AI by design: Companies will focus on mitigating bias and on incorporating fairness techniques and regular audits into AI development. Ethical considerations will be integrated from the start, ensuring issues are addressed throughout the AI lifecycle. Early adopters like Lenovo will drive these efforts, guiding businesses to adopt best practices and fostering a trustworthy, ethical AI landscape.

Also: The best Lenovo laptops: Expert tested

What do you think? Did Linda’s recommendations give you any ideas about how to ensure quality AI implementations with transparency and solid governance? Let us know in the comments below.


You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter, and follow me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, and on YouTube at YouTube.com/DavidGewirtzTV.


