The “know-it-all” AI and the open source alternative

DATE POSTED: May 14, 2025

So, your friendly neighborhood AI, ChatGPT, decided it needs a better memory. Not just a few sticky notes about your preference for concise emails or the fact you’re mildly terrified of pigeons. Oh no. We’re talking total recall. Every chat, every query, every digital murmur you’ve ever shared with it can now be remembered. OpenAI, the architects of this digital elephant, announced this with the kind of fanfare usually reserved for a new smartphone – a feature you’ll either “dearly love or truly hate.”

How wonderfully binary.

This is big. Before, you could curate a small list of “approved facts.” Cute, controllable. Now? If you flip the switch (and it’s rolling out to Plus and Pro users, though Europe and some other regions are getting a breather due to those pesky privacy regulations), ChatGPT sips from the firehose of your entire conversational history. The promise? An AI that truly “gets you,” offering continuity, a digital buddy who remembers that obscure sci-fi novel you mentioned three months ago. The convenience is seductive, isn’t it? A mind that remembers everything you’ve told it, without the human failing of, well, forgetting.

But here’s the kicker, the little itch at the back of your mind that turns into a scream if you listen closely. This new, expansive chat history memory? You can’t review it. You can’t edit it. You can’t selectively snip out that embarrassing question you asked at 3 AM. It’s an all-or-nothing deal: either the AI remembers everything, or you revert to a more amnesiac version, or go full “Temporary Chat” incognito. It’s progress, they say. More useful, more personal. And, as they themselves concede, for some, “more unsettling.” You don’t say?

So, who’s minding your digital ghost in the machine?

Let’s pause and ponder this “unsettling” part. ChatGPT has always stored chat logs on OpenAI’s servers. That’s not new. What’s new is the explicit, active mining of these comprehensive logs to shape future interactions in ways you, the user, can’t easily audit or fine-tune. It’s one thing for a platform to hold your data; it’s another for it to build an increasingly nuanced, yet opaque, model of you from that data, a model that then dictates its behavior towards you.

And the levers for controlling how this persona is constructed and utilized are… limited. You can turn the whole memory system off, sure. But if you want the convenience, you seemingly surrender a significant chunk of control. Is this the inevitable trade-off? Is this the price of a truly personalized AI? Or is it a design choice, one that prioritizes seamlessness over granular user agency?

If an AI “knows you” this intimately, who truly owns that knowledge? You, who provided the raw material, or the company that houses the data and trains the algorithms? It feels less like a tool you wield and more like a system that’s subtly, constantly learning to wield you.

The Memory feature settings on ChatGPT (Image: OpenAI)

Meanwhile, in the sprawling bazaar of openness…

Let’s wander over to a different part of the digital forest, where the philosophy is less about walled gardens and more about, well, open plains. I’m talking about the world of open source, and specifically, an organization like the Linux Foundation. For many, the name “Linux” conjures images of hardcore techies and server rooms. But the Linux Foundation of today? It’s a sprawling umbrella, a “foundation of foundations” as some call it. It’s far more than just Linux.

Jim Zemlin, the long-standing captain of this ship, talks about a “portfolio approach” in his conversation with TechCrunch. It’s about nurturing a diverse ecosystem of projects, from cloud infrastructure and digital wallets to, yes, artificial intelligence. What the Linux Foundation realized early on is that technology doesn’t sit still; it morphs, it intersects. To stay relevant, to provide enduring value, they needed to embrace this flux.

What does this “portfolio approach” mean in practice? It means shared resources. Imagine a collective war chest of expertise: lawyers who understand copyright and patents, specialists in data privacy and cybersecurity, gurus in marketing and event organization. Instead of each individual open source project having to reinvent the wheel or fight regulatory battles alone (think of the EU AI Act or the Cyber Resilience Act), they can tap into this central reservoir. This is crucial: it’s about enabling innovation by removing duplicated effort and providing a support structure.

This brings us to the “open source AI factor.” AI, more than perhaps any software before it, has thrust the concept of “open source” into mainstream debate, often wrapped in controversy. What does “open” even mean when we talk about these complex, data-hungry models? Is it just about access to the source code? What about the vast datasets used for training, or the model parameters themselves?

The Linux Foundation, home to the LF AI & Data Foundation, isn’t shying away from these thorny questions. They even published something called the Model Openness Framework (MOF), an attempt to bring a more nuanced, multi-tiered classification to AI models based on their “completeness and openness.” It’s a recognition that “open” isn’t a simple yes/no proposition in AI. Zemlin himself notes that people in the AI community, a broader church than traditional software engineering, want “predictability and transparency and understanding of what they’re actually getting and using.”
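
To make the tiered idea concrete, here is a minimal sketch of how an MOF-style classification could be expressed in code. It is an illustration only: the three class names follow the framework’s published tiers, but the component lists and the `classify` logic are simplified assumptions, not a reproduction of the MOF’s actual checklist.

```python
from dataclasses import dataclass, field

@dataclass
class ModelRelease:
    name: str
    released_components: set[str] = field(default_factory=set)

# Illustrative component requirements per tier, from least to most open.
# The real MOF checklist is more detailed than this.
MOF_TIERS = {
    "Class III - Open Model": {
        "model architecture", "model parameters", "model card",
    },
    "Class II - Open Tooling": {
        "model architecture", "model parameters", "model card",
        "training code", "inference code", "evaluation code",
    },
    "Class I - Open Science": {
        "model architecture", "model parameters", "model card",
        "training code", "inference code", "evaluation code",
        "training datasets", "research paper",
    },
}

def classify(release: ModelRelease) -> str:
    """Return the most demanding tier whose components are all released."""
    best = "Unclassified"
    for tier, required in MOF_TIERS.items():
        if required <= release.released_components:
            best = tier
    return best

if __name__ == "__main__":
    release = ModelRelease(
        name="example-llm",
        released_components={
            "model architecture", "model parameters",
            "model card", "inference code",
        },
    )
    print(classify(release))  # "Class III - Open Model" under these assumptions
```

The point of a scheme like this is that “open” becomes a graded, inspectable claim rather than a marketing label: you can see exactly which artifacts were released and which were withheld.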

MOF classifications (Image: The Linux Foundation)

Does that sound familiar? Isn’t that precisely what feels… lacking… when ChatGPT’s new memory feature offers a take-it-or-leave-it approach to your entire conversational past? The open source ethos, at its best, strives for that predictability and transparency.

It’s not always perfect, not always neat, but the impulse is there.

Can we have our personalized cake and control it too?

Can the collaborative, resource-pooling, transparency-seeking model of the open source world, as exemplified by the Linux Foundation, offer any inspiration for the proprietary giants? Can it nudge them towards more user-centric control, even if their core models remain closely guarded secrets?

Perhaps.

Imagine if, inspired by the spirit of MOF’s tiered approach to openness, proprietary AI systems offered users a dashboard. Not just an on/off switch for memory, but a way to see which past conversations are most heavily influencing current responses. Or tools to down-weight certain topics, or even “forget” specific interactions without nuking the entire memory. This wouldn’t require open-sourcing the entire model, but it would require a philosophical shift towards granting users more insight and agency.
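Nothing like this exists in ChatGPT today; the sketch below is purely hypothetical, a way of showing what such a per-memory control surface might look like if a vendor exposed one. Every class, field, and method name here is an assumption for illustration, not any real OpenAI API.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical user-facing memory dashboard. Nothing here corresponds to a
# real OpenAI (or any vendor) API; it only illustrates the kind of granular
# controls described above.

@dataclass
class MemoryEntry:
    conversation_id: str
    topic: str
    created_at: datetime
    influence_weight: float = 1.0   # how strongly this memory shapes responses
    forgotten: bool = False

@dataclass
class MemoryDashboard:
    entries: list[MemoryEntry] = field(default_factory=list)

    def most_influential(self, n: int = 5) -> list[MemoryEntry]:
        """Show which past conversations weigh most heavily on current answers."""
        active = [e for e in self.entries if not e.forgotten]
        return sorted(active, key=lambda e: e.influence_weight, reverse=True)[:n]

    def down_weight_topic(self, topic: str, factor: float = 0.5) -> None:
        """Reduce the influence of every memory tagged with a given topic."""
        for e in self.entries:
            if e.topic == topic:
                e.influence_weight *= factor

    def forget(self, conversation_id: str) -> None:
        """Drop a single interaction without wiping the whole memory."""
        for e in self.entries:
            if e.conversation_id == conversation_id:
                e.forgotten = True
```

The code itself is beside the point; the shape of the interface is what matters: inspectable entries, per-topic dials, and targeted deletion instead of a single on/off switch.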

Or is this just wishful thinking?

Does true user agency in the age of AI fundamentally rely on the ability to inspect, modify, and understand the system at a deeper level – something that, almost by definition, points towards open source models? If the AI is a black box, are you ever truly in control, no matter how convenient it is?

The Linux Foundation, in its global expansion, has set up regional entities like Linux Foundation Europe. Nations want a say, a degree of control, over the critical digital infrastructure that underpins their societies. It’s a fascinating parallel. If countries are concerned about digital sovereignty on a macro scale, shouldn’t individuals be concerned about their own “data sovereignty,” especially when it comes to an AI that’s effectively building a second brain based on their intimate thoughts and expressions?

The “portfolio approach” Zemlin champions allows the Linux Foundation to cater to a wide array of needs and contexts. Could we, the users of AI, demand a similar “portfolio” of control over how our data is used to make these systems “know” us?

Right now, with features like ChatGPT’s enhanced memory, the choice often feels stark: immense convenience coupled with opaque processes, or forgo the convenience for a semblance of control. It’s a blunt instrument, an “on or off” switch for something incredibly nuanced – your own history, your own evolving thoughts.

The open source world, for all its sometimes messy complexities, often offers more buttons, more dials, more forks in the road. It’s rooted in the idea that users (or at least, communities of users and developers) should have the power to shape their tools.

Do we want a relationship of blind trust in a helpful but inscrutable black box? Or do we strive for one built on more transparent, understandable, and ultimately, more controllable terms?

The conversation, much like our chat histories, is just getting started.
