Editorial · General AI News
AI Isn't Slashing Artists' Pay, but It's Not the Game-Changer Everyone Hoped For Either
The rise of artificial intelligence has sparked heated debates about its impact on creative fields. Many feared that AI would replace artists, leading to widespread job losses and reduced earnings. However, a recent Gallup analysis based on data from the Journal of Cultural Economics reveals a more nuanced reality. While AI is reshaping artistic work, it isn't causing the catastrophic decline in artists' earnings that many predicted.
The study examined various artistic roles, assigning exposure scores to measure how much each job's tasks could be assisted by generative AI. For instance, music directors and composers had an exposure score of 0.7, meaning a significant portion of their work involves composition or production that AI tools can help draft or modify. In contrast, dancers scored only 0.04, indicating minimal AI involvement due to the live presence and physical skill required in their roles.
The data from 2017 to 2024 show that earnings trends for artistic occupations with higher AI exposure are comparable to those with lower exposure. While there is a slight positive trend in earnings for more exposed jobs, the differences aren't statistically significant. This suggests that AI isn't the job-killing force some fear; it's not even close.
Yet the narrative of AI as a revolutionary tool for artists is also overstated. The study found that while artists are using AI for idea generation and creative exploration, they're less likely to use it for operational tasks like customer interaction or equipment management. This limited application means AI is mostly aiding the early stages of creative work: helping artists experiment, iterate quickly, and organize their workflow. It's a useful tool but not a game-changer.
The broader impact on employment patterns is mixed too. Some highly exposed artistic occupations saw weaker job growth in 2023 compared to less exposed ones. However, these differences are modest, far from the widespread displacement often assumed in debates over AI and jobs. The total hours worked by artists actually increased starting in 2022 and remained elevated through 2024, indicating that while the nature of the work is changing, employment isn't collapsing.
The Gallup Workplace Panel found that artists are more likely than other workers to report using AI for creative tasks. About one in four artists frequently use AI, compared with one in five workers in the broader workforce. This suggests that artists are embracing AI as a productivity tool, not as a replacement for core skills, like live performance and interpretation, that machines cannot replicate.
The truth about AI's impact on artists is more subtle than either side of the debate admits. It's not the job-killing ogre some fear, nor is it the revolutionary creative force others claim. Instead, AI is a helpful tool for certain aspects of artistic work but doesn't fundamentally alter the demand for human creativity and skill.
Looking ahead, the real story isn't about AI replacing artists but about how artists are adapting to, and sometimes resisting, this new technology. The future will likely see more nuanced integration, where AI enhances certain creative tasks while leaving others untouched. For now, the evidence shows that artists can continue their work without the existential threat many predicted. But they should remain vigilant as the role of AI in creative industries continues to evolve.
In short, AI's impact on artists' earnings is negligible: neither a salvation nor a disaster. It's time to move beyond hyperbolic claims and focus on the practical ways AI can enhance creativity without undermining the human touch that makes art meaningful.
Editorial perspective — synthesised analysis, not factual reporting.
More editorials.
The False Promise of Fusion Energy
The pursuit of fusion energy has captivated scientists and policymakers for decades. The idea that we could harness the same power that fuels our sun to produce clean, limitless energy on Earth sounds like a sci-fi fantasy made real. Yet after billions of dollars and countless hours of research, fusion remains a distant goal, consuming more energy than it produces. While recent advancements have brought us closer to this elusive energy source, we must critically assess whether fusion is worth the investment, or whether it is just another false promise that distracts from more immediate solutions.

For decades, fusion has been described as "20 years away," only to remain perpetually out of reach. Even with recent breakthroughs, such as the National Ignition Facility's demonstration that a fusion reaction can release more energy than the laser energy delivered to the target, the reality is far less glamorous: the facility's lasers still draw on the order of 100 times more energy from the grid than the reaction produces, and that's for a single experiment. Scaling this up to a viable power plant remains a monumental challenge. Fusion requires temperatures hotter than the sun's core and materials capable of withstanding such extremes. While scientists have made progress in understanding plasma physics, these challenges suggest that fusion is still decades away from being a practical energy source.

Meanwhile, the world faces an urgent need for clean energy solutions. Renewable energy sources like wind and solar are already viable and scalable. They don't require futuristic breakthroughs or massive investments in experimental technologies. Yet fusion research continues to dominate headlines and secure funding, diverting attention and resources away from proven solutions. This is not to say that fusion should be abandoned entirely; it has the potential to revolutionize energy production if realized. But it must no longer be treated as a quick fix for our current energy dilemmas.
The allure of fusion lies in its promise: clean, inexhaustible energy with minimal environmental impact. But this vision has been decades in the making, and we're still nowhere near achieving it. In contrast, renewable energy technologies are already providing tangible benefits. Wind and solar power are reducing carbon emissions today, creating jobs, and stabilizing energy prices. These solutions don't require breakthroughs; they just need continued investment and policy support.

The fusion research community often argues that the long-term benefits of fusion justify its pursuit. And while it's true that fusion could one day transform our energy landscape, we must weigh this potential against the immediate needs of a world grappling with climate change, energy insecurity, and economic instability. If we continue to prioritize fusion over proven renewable technologies, we risk missing critical opportunities to address these challenges in a meaningful way.

Ultimately, fusion should be part of a broader portfolio of energy solutions, not the sole focus. While scientists continue their noble pursuit of this clean energy source, policymakers and investors must ensure that practical, near-term solutions like wind and solar receive the attention and resources they deserve. The future of our energy system depends on it.
Bridging the Trust Gap: How Agentic AI is Transforming Financial Operations
The rise of artificial intelligence in finance has brought unprecedented opportunities but also significant challenges. Among these, the lack of trust and governance frameworks for AI systems has been a major hurdle for CFOs and financial leaders. However, recent advancements in agentic AI are beginning to address this gap, offering solutions that prioritize transparency, control, and accountability.

In traditional financial operations, manual processes and black-box AI solutions have left organizations vulnerable to errors, compliance issues, and audit challenges. This is where agentic AI comes into play. By integrating a "glass box" architecture, these systems provide end-to-end visibility into AI-driven decisions and actions. For instance, BlackLine's Agentic Financial Operations model allows finance leaders to independently validate AI outputs, ensuring that the technology aligns with financial accuracy and compliance standards. This shift is critical for CFOs who must balance innovation with the responsibility of maintaining financial integrity.

One key aspect of agentic AI is its ability to unify complex workflows through a governed intelligence layer. BlackLine's Verity™ AI, for example, features a digital workforce of specialized agents designed to execute financial tasks with precision and deliver actionable insights. This level of automation not only reduces manual intervention but also enhances accuracy by embedding auditing capabilities directly into the system. Early adopters have already seen significant improvements, such as a 90% reduction in the time needed to create reconciliations. These advancements demonstrate how agentic AI can bridge the gap between innovation and trust.

The future of finance lies in leveraging AI that is both powerful and trustworthy. By prioritizing transparency and governance, agentic systems like BlackLine's Agentic Financial Operations are setting a new standard for AI adoption.
As organizations continue to embrace these technologies, they will not only gain operational efficiency but also strengthen their ability to navigate an increasingly complex financial landscape. The integration of agentic AI represents a step forward in building a future where finance leaders can confidently scale AI without compromising on trust or compliance.
AI Is About to Make Mental Health Care Fairer, and Clinicians Will Love It
The integration of artificial intelligence (AI) into healthcare has long been met with both excitement and apprehension. While AI offers the promise of more efficient, accurate, and personalized care, there are valid concerns about bias, transparency, and equity, especially in mental health care. But a new framework called SAFE AI is poised to change this narrative by ensuring that AI systems are developed and deployed ethically, transparently, and with patient equity at their core.

For years, the potential of AI in mental health has been clear: from crisis triage to treatment recommendations, AI tools could revolutionize how clinicians deliver care. However, this potential comes with significant risks. Without proper oversight, these systems can inadvertently reflect or amplify biases present in training data, potentially harming underserved populations. This is where SAFE AI steps in.

SAFE AI, a groundbreaking framework developed by the Huntsman Mental Health Institute and published in the Journal of Medical Internet Research, directly addresses these challenges. It integrates ethical checkpoints into standard development workflows, helping organizations proactively identify and mitigate biases before they impact patient care. The framework emphasizes ongoing monitoring for "bias drift," subgroup performance evaluations, and clear communication strategies to ensure AI tools are fair and transparent.

What makes SAFE AI particularly exciting is its focus on clinician-friendliness. Unlike many technical frameworks that require extensive training or specialized knowledge, SAFE AI is designed to be accessible to healthcare professionals. It provides practical guidance for small and medium-sized enterprises building medical AI technologies, ensuring that ethical considerations are woven into the fabric of AI development from the start. The impact of such a framework can't be overstated.
By ensuring AI systems are not only effective but also fair and transparent, SAFE AI lays the groundwork for a future where technology enhances rather than undermines mental health care. This is especially crucial given the growing demand for precision medicine and the need to address healthcare disparities.

As we look ahead, the implications of SAFE AI extend beyond mental health care. The principles it establishes (ethical development, transparency, and patient equity) are foundational for any AI system in healthcare. They challenge the industry to prioritize not just technological advancement but also social responsibility. In an era where AI's role in healthcare is expanding rapidly, frameworks like SAFE AI offer a much-needed beacon of hope. They remind us that technology can be a force for good, provided we approach its development and deployment with intention, care, and a commitment to fairness. The future of mental health care is bright, and with tools like SAFE AI leading the way, it's a future where every patient gets the equitable, ethical care they deserve.
The Rise of AI in Identity Verification: A Double-Edged Sword for Financial Security
The rapid adoption of artificial intelligence (AI) in identity verification is reshaping the financial security landscape. While AI-driven solutions offer unprecedented accuracy and efficiency, they also introduce new risks and challenges that must be carefully managed. This editorial explores how AI is being leveraged to combat fraud while simultaneously creating vulnerabilities that could undermine trust in digital systems.

The use of AI in identity verification has become essential for financial institutions as they face increasingly sophisticated fraud. Companies like Vouched and Socure are leading the charge with AI-powered tools that analyze vast amounts of data to detect anomalies and verify identities in real time. These solutions not only enhance security but also streamline the onboarding process, making it faster and more user-friendly for legitimate customers. For instance, Vouched's IDV platform combines multiple AI models and biometric checks to ensure compliance with KYC and AML regulations, blocking fraudulent activity before it occurs.

However, the reliance on AI introduces risks of its own. Synthetic identities created using deepfake technology can bypass traditional verification systems, opening a new frontier of fraud that even the most advanced AI tools struggle to detect. This is where companies like Microblink come in, upgrading their IDV software suites to fight AI-driven fraud. While these advancements are crucial, they also raise ethical concerns about data privacy and bias in AI algorithms.

Looking ahead, the future of identity verification will require a balanced approach that leverages AI's strengths while mitigating its risks. Financial institutions must invest in robust cybersecurity frameworks and collaborate with regulators to establish standardized protocols for AI-driven systems.
Additionally, educating consumers about the potential dangers of synthetic identities and deepfakes is vital to maintaining trust in digital platforms.

In conclusion, AI holds immense promise for enhancing financial security through advanced identity verification solutions. However, its unchecked adoption could lead to significant vulnerabilities if not properly managed. By fostering collaboration between technology providers, regulators, and financial institutions, the industry can harness the benefits of AI while safeguarding against its potential pitfalls. The stakes are high, but with careful planning and execution, AI can remain a powerful tool in the fight against fraud, ensuring a secure and trustworthy digital future.
Stop Pretending AI Is a Reliable Medical Diagnostic Tool
The idea that artificial intelligence can revolutionize medical diagnosis has been gaining traction, but the reality is that AI is not yet ready to take on this critical task. Millions of Americans are already using AI tools like ChatGPT to diagnose their symptoms, with over 52% of people turning to these tools when they feel unwell. This trend is alarming, because it can lead to misdiagnosis and delayed treatment.

Many people believe that AI can provide accurate diagnoses, but these tools are only as good as the information they are given. When supplied with complete information, AI models can identify the correct diagnosis in over 90% of cases. In real-world scenarios, however, patients often present with incomplete or unclear symptoms, and AI models struggle to generate appropriate differential diagnoses, a critical step in clinical reasoning. Studies have shown that AI systems fail at this step in most cases. This limitation can lead to premature closure, where patients are given a false sense of security and may delay seeking further medical attention.

The consequences of relying on AI for medical diagnosis can be severe. Delayed care can increase health risks and financial burdens, as patients may require more complex and expensive treatment down the line. Furthermore, clinicians are already seeing the effects of AI-driven self-diagnosis, with 58% of healthcare professionals reporting that AI is making it harder to treat patients. Many patients walk into the doctor's office with a preconceived notion of their diagnosis based on AI-generated output, which can slow down appointments and damage the doctor-patient relationship.

The problem is not just that AI models are imperfect, but also that they can create a false sense of security among patients. When AI tools provide a diagnosis, patients may feel that they have a clear answer, when in reality they may be missing critical information or context.
This can lead to a lack of follow-up care and a failure to address underlying conditions. Moreover, AI models can perpetuate biases and errors, particularly if they are not trained on diverse populations or subjected to rigorous testing and validation.

As we move forward, it is essential to separate the hype from the reality of AI in medical diagnosis. While AI has the potential to support clinicians and improve patient outcomes, it is not yet ready to replace human judgment. We need to be cautious about relying on AI tools for diagnosis and ensure that patients understand the limitations of these technologies. By doing so, we can avoid the risks associated with AI-driven diagnosis and provide better care for those who need it. Ultimately, the safest way to get a reliable diagnosis is still to consult a medical professional, and we should not pretend that AI is a substitute for human expertise.