Code Lyoko is an animated kids' show made by some french studio that aired in english on Cartoon Network when i was a kid. in each episode a computer virus named X.A.N.A. manipulates the real world by manifesting in different ways:
- a shapeshifter who clones people and sows distrust among them
- autonomous weapon-clad robot dogs sent out from...somewhere?
- a viral song that gives listeners mini seizures
...and so on, while four high school hacker friends with different skill sets fight back through a VR universe called Lyoko.
to me (and my siblings who teased me for liking it) Lyoko's hideous 3D renderings clash with the 2D, how-to-draw-manga real-world parts. however i don't think it's too far off from how a lot of virtual spaces must seem to outsiders; i don't like the way Roblox looks at all, but it's still somehow incredibly popular. places like 4chan have always been cesspools and yet many people are continually drawn to their allure. in season 2, one unassuming emo boy befriends our crew but then swiftly falls "under the influence" of X.A.N.A. and becomes an antagonist.
unpacking things
this Lyoko place feels like a metaphor for the internet and specifically social media. bad actors absolutely can and do use these places to negatively influence real life, for example:
- short-form social media encouraging toxic behaviors due to having little room for nuance
- war criminals advertising to vulnerable/volatile people using coded language familiar to them
- 4chan's countless harassment campaigns and steve bannon leveraging it for trump's presidential run
- elon musk turning twitter (an already bad place) into "X" (a decidedly worse place)
- FAANG et al. enabling global fascism, slave labor and genocides
and X.A.N.A. feels eerily similar to this generative AI moment we're in. a class of disparate technologies is being sold to and thrust upon us under a single well-established label in order to replace or "enhance" existing things. there are valid and grey-area uses of this tech, like captioning audio or describing images when a human won't take the time to do it, but mostly it's being wielded as a weapon of class warfare, a wedge alienating us from our labor. AI is currently:
- fabricating digestible scenes (slop) that will validate people's biases
- inducing psychosis and making users form unhealthy attachments
- being used to replace human labor (ultimately just moving the work elsewhere)
- being deployed as disastrous, half-baked products (grok, copilot, mobile fortify, waymo... i could go on)
but the problems with technology go back much, much further than this. under capitalism, most technological advancements seem to pull the wool over our eyes while widening the disparity between upper and lower classes.
what can be done?
For the master’s tools will never dismantle the master’s house. They may allow us temporarily to beat him at his own game, but they will never enable us to bring about genuine change.
— Audre Lorde, 1979
applying this thinking to the internet age of today, how does it hold up when the master's tools are free and open source? software licenses enable us to rework and reclaim them to do nearly entirely different things. shiny new stuff is usually made of existing independent parts that can be good or evil depending on who you ask, and anything on the web often relies on infrastructure born of the military-industrial complex. so why can't we use these components to do good things and respect the dignity and sanity of everyday people?
well, the trouble lies in the fact that the tools are socially agnostic; it's the people wielding them who determine the consequences. a friend from my local computer club recommended reading Nick Srnicek, and it helped me understand that technology is an accelerant: by virtue of its existence it'll be used to advance some kind of agenda. since the 70s, powerful tech companies have had a stranglehold over the global economy and our psyches.
at the end of every Code Lyoko episode they turn back time, and while obviously we can't do that, we can ask ourselves: what do we want the digital tech of the future to look like? when digital platforms inevitably fall, what do we imagine in their place? what kinds of behaviors do we want our tech to enable, and what should be left in the past?
what we've been doing about it
in the past few months Trump's Department of Homeland Security has escalated to an all-out assault on cities around the US, especially here in Minnesota's Twin Cities of Minneapolis and Saint Paul, where i live, with Operation Metro Surge. i don't need to go into detail; i'll assume you read the news. in response we've seen thousands of our neighbors mobilizing in resistance, using Signal group chats to coordinate and both public and private web platforms to compile information.
some of my friends from this club have been holding in-person events to help less tech-savvy folks learn to use these tools and practice better digital security. others are developing an interactive bot that makes rapid response easier, an ongoing collaborative effort since it debuted. the biggest hurdle for us lies in wresting control over how and why technology is used away from the owner class. if their tools and ideologies don't work for us, it's our responsibility to replace them with something better.
in my opinion, where the tools came from is less important than the will of the people, but the wrong tools can slow you down. i can't stress enough how important it is to be connected with your neighbors and to educate each other on both class and tech consciousness. if we work together we may actually have a chance of surviving, and we can lay the groundwork for potentially stopping this thing. together we can build on and make use of software for the benefit of us, the people.
