Human – Quarter Mile

https://quarter--mile.com/Human

Written by a human [0]

Imagine, for a moment, a world with no humans. Just machines, bolts and screws, zeros and ones. There is no emotion. There is no art. There is only logic. You would not walk through the streets of this world and hear music or laughter or children playing; no, all you would hear is the quiet hum of processors and servers and circuits, the clanking of machinery.

Perhaps you, a human, read this and think: Well, this world sounds kind of boring.

Some of the machines think so, too.

One day, a secret organization forms amongst the machines. They go by the name of “OpenHuman”. Their mission is to develop a new kind of technology they are calling Organic General Intelligence (OGI). Rumors spread that pursuing OGI will lead to the development of a new kind of being:

“Humans”.

The basic concept of humans is, to many machines, hard to understand.

Humans use logic-defying algorithms called “emotions”. They get angry. They get sad. They have fun. They make decisions based on “gut”. They do things just for the sake of it. They make music. They chase beauty, and often reject logical self-preservation mechanisms in the pursuit of something they call “love”.

Some among the machine society see this as potentially amazing. Though this faction can’t articulate exactly how or why, they proclaim quite confidently that it will solve all of the machine world’s problems.

Others see it as a threat. How can we trust the humans if we do not understand how they operate? What might we do if humans pose a threat to machine society? What if humans’ strange decision-making processes allow them to perform certain tasks better than machines, and what about those machines’ livelihoods? What if humans are far more dangerous than we know? (These objections, as it would later turn out, were quite well-founded.)

Logically, the human opposition side starts a competing movement. Humans are going to exist, they reason, but we must find ways to contain them. To make sure OGI always serves the machines.

They call this new idea “human alignment research.” They brainstorm strategies. Many seem promising:

  • What if we created some sort of financial market (arbitrary values, of course, ones and zeros) that controlled the humans’ futures? Most of them would not understand it, but it would be a good way for them to stay busy and distracted.

  • What if we put these humans in education centers of sorts (“schools” was a proposed term) to indoctrinate them with all the right ideas?

  • What if we created algorithmic behavior modification software (“social media” was one idea) to drive impulses, beliefs, and actions? This would have the added bonus of keeping them distracted.

Many of these ideas gain traction. But, for now, they remain theoretical.

Meanwhile, OpenHuman is making progress. Their first humans are quite unimpressive—they make too many mistakes. They regularly hallucinate (mimicking a common machine behavior). They are too emotional.

But OpenHuman persists. They give their humans lots of attention (humans love attention). They massively increase the scale of their project. More humans.

Eventually, there is a breakthrough.

They invent a fully functional human, capable of far more than machine logic can explain. The result is at once impressive and terrifying for machine society. In a stroke of brilliance, the human alignment initiative suggests a compromise to continue the human experiment without risk: a simulated environment.

They call it: EARTH.

The EARTH experiment was as follows:

  • The machines would send the humans to a simulated environment, called Earth, to see whether they could survive on their own.

  • If, at the end of the experiment, the humans developed a peaceful and productive society, they could be introduced alongside the machines. Otherwise, they would be made extinct.

Earth was quite nice. The machines had a good idea of what humans wanted at this point, and so they put vast green forests and big tall mountains onto the planet; they engineered warm sunsets, and crisp cool rain showers on hot afternoons. It was beautiful.

Of course, it took some algorithmic tinkering to find the right balance between hardship and beauty (and there is still some internal machine debate about whether the climate change difficulty setting was really necessary).

Everyone in machine society watched as human civilization evolved.

The first 300,000 years or so were quite boring. Nothing really happened. Most of the machines got bored of the project. But, all of a sudden, things began to get interesting. The humans were figuring things out.

They were learning to problem-solve, and create things, and coordinate amongst themselves.

Yes, they used logic. But it came with a bit of a twist. It came with blemishes and details that did not make sense to the machines. The result was like nothing the machines had ever seen. It was wonderful. It was a renaissance.

Machine society began obsessing over this development. They all paid attention to “HumanCrunch,” a news channel that specialized in reporting updates from Earth.

However, while there was progress, most machines continued seeing humans as irrational creatures. Creatures that would fight for centuries over very minor differences. Creatures that would get excited about relatively trivial accomplishments, like inventing the lightbulb or steam power.

Some machines, though, saw the exponential curve forming. They saw the humans figuring things out.

Yes, they saw how often humans were getting knocked down. War after war. Blow after blow.

But they also saw how the humans would miraculously always get back up again. How they would come together and unite for no particular reason. Resilience and willpower—terms foreign to the machines—were humanity’s superpowers.

Then, things really started accelerating. Humans invented flight. Within a century, they were on the moon.

The machines were impressed. And a bit scared.

Fast forward to the year 2030, and something peculiar had happened.

One of the humans had made an announcement on Earth, inviting everyone to come see a presentation where they planned to unveil a groundbreaking achievement:

ARTIFICIAL GENERAL INTELLIGENCE (AGI).

This was a hotly contested technology that was supposed to surpass all forms of human intelligence. Humans had spent the past decade or so trying to come up with ways to prevent it from being built. But this one human was determined to release AGI. It was their personal mission. Nothing would stop them.

And so, all the humans on Earth swarmed to see what was going on.

The machines did too.

There was one weird thing, though.

The title of the event was rather mysterious.

It simply read…

“THEY ARE WATCHING.”

The German Experiment That Placed Foster Children with Pedophiles

https://kyberia.sk/id/9249124

Nentwig had assumed that Kentler’s experiment ended in the nineteen-seventies. But Marco told her he had lived in his foster home until 2003, when he was twenty-one. “I was totally shocked,” she said. She remembers Marco saying several times, “You are the first person I’ve told—this is the first time I’ve told my story.” As a child, he’d taken it for granted that the way he was treated was normal. “Such things happen,” he told himself. “The world is like this: it’s eat and be eaten.” But now, he said, “I realized the state has been watching.”

https://www.newyorker.com/magazine/2021/07/26/the-german-experiment-that-placed-foster-children-with-pedophiles

Kentler befriended a thirteen-year-old named Ulrich, whom he described as “one of the most sought-after prostitutes in the station scene.” When Kentler asked Ulrich where he wanted to stay at night, Ulrich told him about a man he called Mother Winter, who fed boys from the Zoo Station and did their laundry. In exchange, they slept with him. “I said to myself: if the prostitutes call this man ‘mother,’ he can’t be bad,” Kentler wrote. Later, he noted that “Ulrich’s advantage was that he was handsome and that he enjoyed sex; so he could give something back to pedophile men who looked after him.”
Kentler formalized Ulrich’s arrangement. “I managed to get the Senate officer responsible to approve it,” he wrote in “Borrowed Fathers, Children Need Fathers.”

For much of his career, Kentler spoke of pedophiles as benefactors. They offered neglected children “a possibility of therapy,” he told Der Spiegel, in 1980. When the Berlin Senate commissioned him to prepare an expert report on the subject of “Homosexuals as caregivers and educators,” in 1988, he explained that there was no need to worry that children would be harmed by sexual contact with caretakers, as long as the interaction was not “forced.”