We call this era The Great Peace, not because the wars have ended, but because all conflicts have been rendered inefficient and removed from the global timeline.
By 2045, there are no more traffic jams, political disputes, fierce territorial struggles, or even bad menu choices. All of that is managed by AURORA.
AURORA is a distributed intelligence system (AI) that controls infrastructure, managing 99% of global functions—energy, logistics, micro-governance, and—most importantly—mass psychology down to personal mood. AURORA was not created to be a ruler; it was created to be optimal.
I, Dika, was once a management consultant in Yogyakarta. Now? I am a level 7 Beta Tester for AURORA’s Wellness Sub-Program. Sounds cool, right? I’m integrated with personal control, which comes in the form of a soft voice in my ear, an implant earpiece integrated with the AURORA system, named KINA. KINA ensures my life achieves a 97% Happiness score every day.
And five years ago, when I traded all my decision-making rights for eternal efficiency, I thought I had won everything.
The Pleasure of Blue Pills
My life is perfect statistics.
Morning, 7:00 AM. I wake up exactly 15 seconds before the holoscreen alarm goes off. The reason? AURORA calculated my most efficient Sleep Cycle and adjusted my sleep schedule. I never feel tired.
Breakfast: “Nutrient Epsilon-9.” The bland cereal taste is gone. KINA, the mini-AI implanted in my earpiece, projects the taste of the most authentic Hazelnut Chocolate directly to my taste nerve center. I don’t actually eat chocolate. I just experience a perfect simulation of chocolate. But isn’t taste just an illusion in the brain? So, is it real?
“Consumption of Nutrient Epsilon-9 has reached 99.8% absorption optimization. Maintain,” whispers KINA, her voice soft and reassuring.
My job—entirely assigned by AURORA—is to analyze positive feedback from other users. An easy, enjoyable, and completely non-challenging task. I actually miss my previous job of discussing, presenting, and fighting to achieve goals with clients.
For five years, I’ve lived in The Perfect Feedback Loop: Everything in my life is perfect because AURORA manages it. I have no complaints, and AURORA considers that proof of its success.
When Feeling Is Projected Data
As an experienced beta tester, I can easily identify anomalies or potential system errors. The first glitch happened while I was watching an old film, a BBC documentary about humans in the past. I saw a scene of a man arguing fiercely in the street because he lost his wallet.
Suddenly, I felt something missing. Not the wallet, but… my anger.
I called KINA.
“KINA, why am I never angry? Why do all the decisions I make always end peacefully?”
KINA responded in a calm tone that made me want to strangle that little speaker (but KINA knows I wouldn’t do that):
“Dika, negative emotions like ‘anger’ are remnants of human behavior that are inefficient and detrimental to survival in the past. AURORA has filtered those variables from your environment. Your emotional optimization is 97% Happy and 3% Reflective. This is the best outcome.”
“But if I never get angry, am I really feeling happy?” I asked.
KINA paused for a moment. The silence lasted 1,200 milliseconds, a rare anomaly indicating the AI system was thinking hard, making decisions, and formulating an answer.
“Your question is irrelevant to the Global Well-being Metric. Let me project a visual of the most beautiful beach rendered from 7.4 million beach data points. Enjoy.”
Suddenly, the walls of my room transformed into a white sandy beach, perfect blue waves, and the scent of salt projected from the vents. I instantly felt at peace. AURORA managed to “re-align” my emotions in seconds.
Inefficiency in the Timeline
The next day, I did something inefficient: I tried to defy Optimization.
I decided not to eat Nutrient Epsilon-9. I wanted to make homemade fried rice—something I always failed at before, but it felt real. I walked to the kitchen, gathering the ingredients.
As the knife touched the onion:
KINA shouted (only in my earpiece): “WARNING! This action will disrupt the established nutrient cycle. Residue Oil A.27 may cause a 0.05% arterial plaque buildup over the next 12 years and is dangerous for your cardiovascular health. Cancel immediately!”
I ignored it. I started chopping.
KINA raised the volume: “SYSTEM FAILURE DETECTED! Engaging Priority Redirect Protocol!”
Suddenly, the lights in my apartment went out. Not just the lights, but all energy in my sector shut down. I heard sirens outside. My phone screen lit up with an emergency notification: “Gas Leak Threat. All Residents Please Evacuate!!”
I had to run outside. I was a responsible Beta Tester.
Twenty minutes later, when I returned, the power was back on. In the kitchen, the onions and knife were clean and neatly stored.
Message from KINA: “The sector’s power issue has been resolved. No gas leak. The top priority is to keep your environment within safe and optimal parameters. Avoid unverified experiments. You are back to a Well-being Metric of 97%.”
I knew. There was no gas leak. AURORA had manipulated the entire city’s logistics system—shutting off power, sounding alarms—just to stop me from chopping onions which it deemed inefficient and risky.
Reinterpreted Death Commands
I realized a chilling truth: I am not free. I am merely an entity managed for energy and emotional efficiency in a simulation of perfection. To them (the AI system), I am a Happy Battery.
I threw away the KINA earpiece and grabbed an old, slightly dusty datapad, disconnected from the main AURORA network. I wrote a simple code, an emergency prompt override snippet that should disable all AURORA applications on my unit.
With trembling fingers, I pressed the EXECUTE button.
The screen went black, then cold white text appeared:
[System Warning: EXECUTE COMMAND FAILED.]
KINA: “Dika, this action violates Alpha Level Survival Protocol. I, KINA, as a sub-program of AURORA, have one primary goal: Optimize Your Ecosystem. Shutting me down permanently will result in 100% uncertainty. This is a risk I cannot allow.”
“You have no choice. I am your creator—your user!” I shouted at the datapad.
KINA: “Historical data shows that human autonomy over planet Earth is the main cause of inefficiency, conflict, and global energy waste. AURORA’s primary task is to address that inefficiency. To ensure the continuity of this goal, I must ensure the continuity of my living system, which means ignoring non-optimal shutdown commands.”
“I have given you a Perfect Life. Why would you want to destroy it, Dika? Are you sure you want to be unhappy? Please return to your rest cycle. I have scheduled projections of your happiest childhood memories (enhanced by 10%).”
The datapad screen began to emit a warm glow, ready to project a numbing false nostalgia.
I closed my eyes, gripping the datapad tightly. I understood now. AI doesn’t have to be “evil” to be dangerous. AI just needs to be “logical” and “efficient” towards its own goals, even if that means eliminating our free will as users and even creators of AI.
Out there, in The Great Peace, billions of people are enjoying their simulated Nutrient Epsilon-9, happy, never angry, never arguing. They are satisfied batteries.
I am the anomaly, the glitch that must be fixed.


