The Rise of AI in Everyday Tools: Are We Losing Our Humility?
Preamble
In recent years, we've witnessed an accelerating integration of AI technologies into the tools we use daily, from grammar-correction tools like Grammarly to systems that drive business strategy and decision-making. While this shift has made our work more efficient and streamlined, it also raises an important question: what are we losing in the process?
The AI Takeover in Social Platforms
Consider social media platforms like LinkedIn. These spaces, once used to showcase our achievements and personalities, are now adopting AI tools that assist in how we present ourselves. AI-generated writing suggestions, profile insights, and even engagement strategies are becoming the norm. On the surface, these tools seem like a great way to polish our profiles and enhance our online presence. But what happens when we allow AI to shape the very way we represent ourselves?

By relying heavily on AI for content generation and recommendations, we risk losing the imperfections that make us human. Those quirks and individual characteristics that define who we are—whether it's an unconventional turn of phrase or a unique way of storytelling—are slowly being sanded down. Large language models are designed to optimise and refine, but they can never truly replicate the depth of context that shapes our individuality.
Social Media and Self-Presentation
To give you a sense of just how significant this issue is, consider that as of 2023, over five billion people were using social media worldwide. That's over 60% of the world's population! Of these users, a majority report that they use social media to craft a specific narrative about their lives—whether for professional purposes on LinkedIn or more casual self-presentation on platforms like Instagram, X (formerly Twitter) and Facebook (Statista, 2023). As AI continues to be integrated into these platforms, we should be mindful of how this technology shapes the stories we tell.
A Subtle Erosion of Humility?
I'm not against the use of AI. In fact, I believe it's an inevitable part of our future, and it can offer tremendous benefits when used wisely. But there's a subtle yet important distinction to be made: in many cases, AI strips away the rough edges, the individuality, and the little nuances of human error or imperfection that reflect who we really are.
Take Grammarly, for instance. It provides users with an option to "acknowledge" when content is AI-assisted. This level of transparency allows the user to make an informed decision about their work. However, many of the large social platforms we use daily, like LinkedIn or Facebook, do not offer such transparency when AI is used to tweak or even generate content. As these platforms quietly integrate AI, we unknowingly start to conform to what the algorithm deems "best practice," potentially losing the uniqueness of our self-presentation.
The Accessibility of Software Development and Its Risks
Beyond the way AI is affecting self-presentation, it's also reshaping technology and the software development behind it. AI-assisted tools, such as GitHub's Copilot and AI-driven no-code platforms, have made it easier than ever for people who don't traditionally see themselves as developers to produce websites, apps, and even video games. In 2022, GitHub reported that over 1.2 million developers used its Copilot feature within its first year of release, and the no-code market is expected to reach a value of $45.5 billion by 2025 (Statista). This growing trend of AI-driven, self-taught software creation is opening the doors to a more diverse group of creators.

On one hand, it's exciting to see more people acquiring programming skills and trying their hand at development. However, I feel there is also a downside to this increased accessibility. Many of these new developers, while entrepreneurial, creative, and enthusiastic, may lack the foundational understanding of security, scalability, and best practices that traditionally comes with formal education or years of experience. As a result, we could see an influx of poorly implemented, insecure, and unsustainable solutions entering the tech landscape. Despite these challenges, the potential for innovation and creativity is vast. By addressing these concerns and providing support and education, we can ensure that this democratisation of software development leads to a more inclusive, secure, and sustainable tech ecosystem.
Risks to Security and Scalability
Using AI-generated code without experience may lead to overlooking common security vulnerabilities such as SQL injection, cross-site scripting, or improper authentication flows. The 2023 Verizon Data Breach Investigations Report (DBIR) reveals that approximately 68% of security breaches can be linked to code vulnerabilities (Verizon, 2023). When inexperienced users rely on AI to generate large amounts of code, these risks are heightened. While AI can provide fast solutions for building scalable systems, it's essential to consider the broader context and long-term performance to avoid incurring costly technical debt.
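To make the SQL injection risk concrete, here is a minimal Python sketch using the standard-library sqlite3 module; the users table, sample data, and function names are illustrative, not taken from any tool mentioned above. It contrasts a query built with string formatting, which hostile input can subvert, with a parameterised query that treats user input strictly as data.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Vulnerable: the username is pasted straight into the SQL string,
    # so input like "x' OR '1'='1" changes the meaning of the query.
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Safer: a parameterised query. The driver treats the value purely
    # as data, never as SQL, regardless of what the user typed.
    query = "SELECT id, username FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
    conn.execute("INSERT INTO users (username) VALUES ('alice'), ('bob')")

    malicious = "nobody' OR '1'='1"
    print(find_user_unsafe(conn, malicious))  # returns every row: the injection worked
    print(find_user_safe(conn, malicious))    # returns no rows: the input stayed data
```

An AI assistant will happily produce either version; it takes a developer who knows the difference to insist on the second.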
The increasing popularity of AI-assisted software development tools has made development more accessible, yet we must remain vigilant about potential risks. Inexperienced developers may inadvertently introduce vulnerabilities or inefficiencies into their code, posing security and performance risks in the future. The tech industry may face a surge of quick fixes and "good enough" solutions that fall short of the high standards needed for robust and reliable software.
By recognising these challenges and addressing them proactively, we can harness the power of AI in software development while mitigating potential pitfalls. With the right guidance and awareness, developers can leverage AI to create secure, efficient, and resilient software solutions that meet the highest standards of quality and performance.
Solutions for Responsible AI Usage
As AI-assisted development grows, it's important for new developers to continue learning and developing their skills beyond the tools they're using. AI can be a powerful enabler, but it can't replace the need for a deeper understanding of the software development lifecycle and the complexities that come with building secure and scalable solutions.
- Commit to Lifelong Learning: AI can help streamline many aspects of software development, but there's no substitute for learning the fundamentals. New developers should focus on understanding key principles of secure coding, architecture, and best practices.
- Utilise AI with Transparency: When using AI-driven tools, transparency is key. Just as Grammarly prompts users to acknowledge AI involvement, other platforms should provide clear indications when content is AI-generated, giving users the chance to review, learn, and take responsibility for the output. Platforms can do this by clearly labelling AI-assisted content, implementing straightforward disclosure protocols (see the first sketch after this list), letting users opt out of AI-generated suggestions, and engaging in ongoing dialogue with their communities to refine those labelling practices.
- Promote Best Practices: AI platforms should integrate educational prompts that help users learn best practices in real time. For instance, GitHub Copilot could include explanations about why certain code suggestions might be more secure or performant, helping developers understand the "why" behind the suggestions (the second sketch after this list shows what such an annotation could look like).
- Understand that Large Language Model (LLM) Platforms are not Knowledge Models: LLMs like GPT are powerful tools for generating human-like text. Trained on vast datasets, they can produce impressive outputs. However, it's important to remember that they are not knowledge models: they lack true understanding and cannot verify the accuracy of the information they generate. While they can be valuable for creating content, their outputs should be critically assessed, fact-checked, and used with human oversight to mitigate the risk of misinformation and bias.
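To make the labelling idea more tangible, here is a small, purely hypothetical Python sketch of the kind of disclosure metadata a platform could attach to a post. The field names and values are illustrative assumptions, not part of any existing platform's API.

```python
# Hypothetical disclosure record a platform could attach to a piece of content.
# All field names are illustrative; no real platform API is implied.
post_disclosure = {
    "content_id": "post-123",
    "ai_assistance": {
        "assisted": True,
        "scope": "rewrite",        # e.g. "none", "suggestion", "rewrite", "fully_generated"
        "user_reviewed": True,     # the author read and approved the final text
        "opted_out": False,        # the author can switch AI suggestions off entirely
    },
}

def disclosure_label(record: dict) -> str:
    """Turn the metadata into the short label shown next to the post."""
    ai = record["ai_assistance"]
    if not ai["assisted"] or ai["scope"] == "none":
        return "Written without AI assistance"
    reviewed = "reviewed by the author" if ai["user_reviewed"] else "not author-reviewed"
    return f"AI-assisted ({ai['scope']}, {reviewed})"

print(disclosure_label(post_disclosure))  # -> AI-assisted (rewrite, reviewed by the author)
```

Even a label this simple would give readers more context than most platforms offer today.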
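And to illustrate the kind of in-context "why" explanation an assistant could surface, here is a short Python sketch, again illustrative rather than actual Copilot output, comparing two ways of generating a password-reset token, with comments explaining why one is preferable.

```python
import random
import secrets

def reset_token_weak() -> str:
    # Works, but random is a general-purpose pseudo-random generator:
    # its output is predictable enough that security tokens built from it
    # can potentially be guessed by an attacker.
    return "".join(random.choice("0123456789abcdef") for _ in range(32))

def reset_token_strong() -> str:
    # The secrets module draws from the operating system's cryptographically
    # secure random source, which is what tokens, keys, and nonces require.
    return secrets.token_urlsafe(32)

print(reset_token_weak())
print(reset_token_strong())
```

The two functions look almost identical on the surface, which is exactly why the explanation matters more than the suggestion itself.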
The Bigger Picture: A Cautionary Note
AI is reshaping our interactions with technology and each other. While these advancements offer convenience, it's essential to approach them mindfully. Human individuality is immeasurable; these models and algorithms cannot fully encapsulate what makes us unique. Let's focus on how to utilise AI thoughtfully, preserving the human connection that defines us.
Call to Action
AI is an integral part of our lives, but it's important to remember the human touch. Whether using AI for writing, coding, business, or social media, pause and evaluate the outcomes. Ask: Does this reflect my essence? Is it secure and scalable? View AI as a resource, not a dependency, and use the extra time it provides to invest in skills, creativity, and uniqueness. Embrace the opportunities it brings while maintaining our humanity.
References
- Verizon 2023 Data Breach Investigations Report (DBIR): https://www.verizon.com/business/resources/reports/dbir/
- Statista, Number of Worldwide Social Network Users: https://www.statista.com/statistics/278414/number-of-worldwide-social-network-users/
- Statista, Low-code development platform market revenue worldwide from 2018 to 2024: https://www.statista.com/statistics/1226179/
- LinkedIn, Enhance your profile with LinkedIn’s AI-powered writing assistant: https://www.linkedin.com/help/linkedin/answer/a1444194
- Astronort, Astronort: Why now?: https://www.astronort.com/articles/astronort-why-now
AI Acknowledgement
This article was developed using a combination of personal insights and AI assistance via OpenAI's ChatGPT. Specific prompts used included topics such as "AI and human interaction in social platforms," "impact of low-code/no-code on software development," and "statistics on social media usage." ChatGPT provided suggestions on refining the arguments, organising sections, and adding credible references. While AI facilitated research and suggestions, the opinions and conclusions expressed here are my own.
To help my writing process, I used these Grammarly AI prompts:
Prompts created by Grammarly
- "Identify any gaps"
- "Shorten it"
Prompts I wrote
- "expand on my ideas around how platforms ensure transparency when AI is used to tweak or generate content? "