Journal & Stories | Tos Connect

Tos Connect

Behind the Music & Production Notes
Dec 18, 2025 Production

The Anatomy of Exhaustion: The Real Story Behind "No Hard Work"

There is a specific kind of tired that sleep cannot fix.

It isn’t the physical exhaustion of running a marathon or lifting weights. It is a digital exhaustion—the static buzz of staring at screens, the endless scroll of other people’s highlight reels, and the pressure to be constantly "on." This was the headspace I was in when I wrote No Hard Work.

For a long time, I thought a "hard worker" was someone who was always busy. In the music industry, specifically here in the growing Cambodian scene, there is a hustle culture that tells you if you aren't releasing a track every week, you are invisible. I tried to keep up with that pace. I forced melodies when I didn't feel them. I spent hours mixing tracks that had no soul.

But one night, driving home through Phnom Penh at 2 AM, the realization hit me: I wasn't doing "hard work." I was just drowning in "hard feelings." That phrase stuck in my head like a loop. It became the seed for the entire album.

The Mistake That Became the Beat

Most producers will tell you that their best songs started with a chord progression or a catchy melody. No Hard Work started with a mistake.

I was in my home studio, messing around with a modular synth patch. I was trying to create a light, airy pad sound—something atmospheric and pretty. But I had routed the cables wrong. Instead of a soft pad, the synth spit out this heavy, distorted, rhythmic bass pulse. It sounded like machinery breaking down. It sounded like a headache.

My instinct was to delete it and fix the patch. But I stopped. That ugly, grinding sound was exactly how I felt.

I looped that four-bar bassline. I didn't add drums for two hours. I just let that heavy pulse run in the background while I sat in the chair, writing lyrics on my phone. That bassline is what you hear in the final version of the track. It is the heartbeat of the anxiety that drives the song.

Deconstructing the Lyrics

The chorus line, "No hard work, just hard feelings," is often misunderstood. Some listeners think it’s about being lazy. It is actually the opposite. It is about the paralysis of overthinking.

When you are emotionally drained, even simple tasks feel like climbing a mountain. You spend 90% of your energy fighting your own brain and only 10% actually doing the work. That is the tragedy of the song. You are trying so hard, but you have nothing to show for it but exhaustion.

I wrote the second verse, "Blue light silhouette / Watching the sun forget to rise," about my insomnia. We have all been there—lying in bed, bathed in the blue glow of a smartphone, doom-scrolling until dawn. You know you should sleep, but you can't disconnect. You are "Tos Connect"—connected to everything, but attached to nothing.

The "Anti-Production" Approach

For the vocal recording, I made a conscious decision to break the rules.

In modern pop and electronic music, vocals are usually heavily processed. We add reverb to make them sound big, delay to make them sound dreamy, and Auto-Tune to make them perfect. I wanted No Hard Work to sound uncomfortable.

I stripped away almost all the reverb from the lead vocal. This is a technique known as "dry vocal mixing." When you hear a dry vocal in headphones, it sounds like the singer is standing right next to your ear. It feels intimate, almost invasive. I wanted the listener to feel like I was whispering my anxieties directly to them, with no space to hide.

We used a standard condenser microphone, but I stood much closer to the pop filter than usual. This captured the tiny breaths, the mouth noises, the cracks in the voice. We kept those "imperfections" in. If I had polished the vocals too much, the song would have lost its honesty. You can't sing about being broken with a perfect, shiny voice.

Why Electronic Soul?

This track defines the genre I am trying to build: Electronic Soul.

"Electronic" is the method—the synths, the drum machines, the cold precision of the grid. "Soul" is the human element—the pain, the voice, the story. No Hard Work is the collision of these two worlds. It is a robot heart trying to feel human emotions.

I released this song not knowing if anyone would relate. I thought it might be too personal, too dark. But the response has been overwhelming. It turns out, I am not the only one who feels this way. We are a generation defined by burnout, trying to find a melody in the noise.

If you are listening to this track tonight, feeling that same digital exhaustion, just know: you aren't lazy. You're just feeling hard. And that’s okay.

Enjoying the music?

If these production notes helped you, consider fueling the next session.

☕ Buy me a Coffee
Dec 17, 2025 Identity

Why I Quit Pop Music to Create "Electronic Soul"

I used to write songs that were designed to be liked, not felt. In the early days of my career, I was obsessed with the "Formula." You know the one: 120 beats per minute, four chords that loop forever, a catchy hook within the first 15 seconds, and lyrics about generic love or partying.

And it worked. People tapped their feet. They nodded their heads. But the moment the song ended, they forgot it. I was creating background noise for other people's lives, and slowly, it was killing my love for music.

The Trap of "Content Creation"

We live in an era where musicians are pressured to be "Content Creators" first and artists second. The algorithm demands speed. It wants a new video every day, a new trend every week. To keep up, you have to simplify. You have to sand down the rough edges of your art until it is smooth, shiny, and completely safe.

I remember sitting in a studio session a few years ago. We were working on a track that was technically "perfect." The vocals were tuned to the cent. The drums were quantized to the millisecond. It sounded like a hit. But when I listened back on the drive home, I felt... nothing. It was like eating a rice cake—it fills you up, but it has no flavor.

That was the turning point. I realized I didn't want to make "Pop" music anymore. I wanted to make music that haunted you.

Defining "Electronic Soul"

I stopped releasing music for six months. I went back to my roots. I listened to the old soul records my parents used to play—Aretha Franklin, Sam Cooke, Marvin Gaye. The recording quality wasn't perfect. You could hear the tape hiss. You could hear the singer taking a breath. But the emotion was so heavy it felt like it had physical weight.

Then, I switched to my other obsession: French House and Detroit Techno. Artists like Daft Punk and Jeff Mills. I loved the cold, mathematical precision of the synthesizer. I loved how a drum machine could put you in a trance.

I asked myself: What happens if you crash these two worlds together?

That is how Tos Connect was born. "Electronic Soul" is the friction between the machine and the human. It is a cold, digital landscape occupied by a warm, beating heart.

The Production Shift

Changing genres meant changing my entire workflow. In Pop, you start with the hook. In Electronic Soul, I start with the "Atmosphere."

Now, before I write a single lyric, I spend hours designing a sound palette. I might take a recording of traffic in Phnom Penh and stretch it until it sounds like a choir. I might run a clean piano sound through a distortion pedal until it sounds broken. I am looking for sounds that have scars.

For example, on my track The Lie Was Worse, the bassline isn't a preset. It's a recording of a Moog synthesizer that I re-amped through a guitar amplifier in a tiled bathroom. It sounds messy. It sounds dangerous. A pop producer would have cleaned it up. I turned it up.

The Risk of Being Different

Leaving the safety of Pop music was terrifying. When you make weird, moody music, you lose the casual listeners. You lose the people who just want something to dance to at a club.

But the people who stay? They connect on a deeper level. They are the ones who listen with headphones in the dark. They are the ones who read the lyrics. They are the ones who feel that same digital exhaustion that I feel.

To me, that is what "Tos Connect" actually means. It isn't about connecting with everyone. It's about connecting deeply with the few who understand the language of the soul.

If you are an artist reading this, feeling stuck in the "Content Trap," I challenge you to stop chasing the algorithm. Make the song that scares you. Make the song that sounds like you.

Dec 16, 2025 Gear & Tech

My 2025 Studio Setup: How I Produce on a Budget in Cambodia

There is a dangerous myth in the music industry that you need a million-dollar studio to make a hit record. We see videos of famous producers sitting behind massive mixing consoles that look like spaceships, surrounded by walls of vintage synthesizers. It is easy to feel discouraged. It is easy to think, "I can't make professional music because I don't have that gear."

I am here to tell you that is a lie. The "Tos Connect" sound—that atmospheric, detailed Electronic Soul—wasn't created in a soundproof bunker in Los Angeles. It was created in a spare bedroom in Phnom Penh, battling the sound of construction work and monsoon rain.

In 2025, the barrier to entry is lower than ever. If you have a laptop and a pair of ears, you are dangerous. Here is exactly what I use to create my tracks, proving that limitations are actually a superpower.

The Brain: The Computer & DAW

My entire musical universe lives inside a MacBook Pro. I don't use external DSP towers or fancy hardware accelerators. Modern processors are incredibly powerful. I can run 50 tracks of audio and heavy plugins without the system sweating.

For software, I swear by Ableton Live. I used to use other DAWs that were more linear, like a tape recorder. But Ableton feels like an instrument itself. The "Session View" allows me to loop ideas endlessly without committing to a structure. This is crucial for Electronic Soul. I often start with a drum loop and a synth pad, letting them roll for 20 minutes while I hum melodies over them. It keeps the workflow fluid and prevents "Writer's Block."

The Ears: Monitoring on a Budget

If you can't hear the truth, you can't mix the truth. For years, I mixed on cheap gaming headphones. The result? My mixes sounded great in my room but terrible in the car. The bass would disappear, or the hi-hats would pierce your ears.

I finally invested in a pair of Yamaha HS Series monitors. They are famous for being "brutally honest." They don't make the music sound good; they make it sound accurate. If a mix sounds good on these, it will sound good anywhere.

However, living in an apartment means I can't always blast speakers at 3 AM. That is where my open-back headphones come in. I use them to check the stereo width and the reverb tails. If you are a bedroom producer, good headphones are more valuable than expensive speakers because they remove the "room sound" from the equation.

The Voice: The Microphone Choice

For vocals, people assume you need a $3,000 Neumann microphone. I use a simple Audio-Technica condenser mic that costs less than a pair of sneakers.

Here is the secret: The microphone matters less than the performance and the placement. For tracks like No Hard Work, I used the "Proximity Effect." By standing very close to the mic (about 2 inches away), the capsule naturally boosts the low frequencies in my voice. This gives that warm, intimate "whisper in your ear" sound without needing heavy EQ processing.

I plug this into a Focusrite Scarlett interface. It is the industry standard for a reason. It is clean, it is durable, and it just works. You don't need a fancy tube preamp to get a radio-ready vocal in 2025.

The Space: Battling the Environment

The biggest challenge of producing in Cambodia isn't the gear; it's the noise. Phnom Penh is a loud, vibrant city. There are motorbikes, street vendors, and sudden thunderstorms.

I haven't spent thousands on professional soundproofing. Instead, I use "DIY" acoustic treatment. I built my own absorption panels using heavy towels and wooden frames. I have a thick rug on the floor to stop reflections. When I record vocals, I often do it inside a closet full of clothes. Clothes are excellent sound absorbers—they stop the vocal from bouncing off the walls and sounding "boxy."

Sometimes, the environment actually makes it onto the record. In The Lie Was Worse, there is a faint high-pitched texture in the background. That isn't a synthesizer—it's the sound of insects chirping outside my window, pitched down two octaves. Instead of fighting my environment, I sample it.

Conclusion: It's Not About the Gear

If I had waited until I could afford a "Pro Studio" to start releasing music, nobody would have ever heard of Tos Connect. I would still be waiting.

Don't let Gear Acquisition Syndrome (GAS) stop you from creating. A hit song is 10% sound quality and 90% emotion. Learn to use the tools you have until they break. Master your stock plugins before you buy expensive ones. The magic is in your head, not in your hardware.

Dec 15, 2025 Song Breakdown

The Lie Was Worse: Deconstructing the Sound of Betrayal

There is a specific moment in the timeline of a betrayal that hurts more than any other. It isn't the moment the bad thing happened. It's the moment you realize the lengths someone went to cover it up. The original act is a sharp pain, but the lie is a slow-acting poison. It rewrites history. It makes you question your own reality.

The Lie Was Worse is my attempt to translate that very specific, sickening emotional freefall into audio. It is perhaps the darkest song on the new EP, and easily the hardest one to write. Unlike tracks that start with a cool beat or a synth patch, this one started with a knot in my stomach.

This is a breakdown of how I built a soundscape for deception, moving from the analog skeleton to the digital flesh of the production.

Phase 1: The Analog Skeleton (The Truth)

For a song about fake realities, I needed to start with something undeniably real. Every "Electronic Soul" track I write begins on an acoustic instrument. If a song can't survive being played on just a piano or a guitar, it won't survive being produced with fifty layers of synthesizers.

I sat at my upright piano in the dark. I wasn't looking for a melody; I was looking for a chord progression that felt "nauseous." I landed on a sequence that leans heavily on minor 9th chords and suspensions that never resolve. In music theory, these chords want to settle back onto a harmonic "home," but I kept denying them that resolution. The progression feels unstable, like walking on floorboards that you know are rotten underneath.
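The actual progression stays in the session files, but a minimal MIDI sketch shows the kind of voicing involved. The root and register here are purely illustrative, not the song's:

```python
# A minor 9th voicing in MIDI note numbers (Cm9, chosen for illustration):
# root, minor 3rd, perfect 5th, minor 7th, major 9th
Cm9 = [48, 51, 55, 58, 62]   # C3, Eb3, G3, Bb3, D4

# Intervals above the root, in semitones
intervals = [n - Cm9[0] for n in Cm9]
print(intervals)  # → [0, 3, 7, 10, 14]
```

The 9th sitting 14 semitones above the root is what gives the chord its hovering, unsettled quality; withholding the move back to the tonic keeps that instability alive.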

The demo was just me, the piano, and the sound of the piano pedals squeaking. I kept those squeaks in the final recording. They are human imperfections. They represent the raw truth before the digital manipulation begins.

Phase 2: The Digital Deception (The Lie)

Once the analog foundation was laid, it was time to introduce the "Electronic" elements. In the context of this song, electronics represent the lie itself—the artificial construct designed to hide the truth.

I used a synth pad that sounds beautiful on the surface—a shimmering, airy choir sound. But if you listen closely on headphones, you'll hear it's corrupted. I used a "bit-crusher" effect to degrade the audio quality slightly, introducing digital artifacts and crackles. It’s a sonic metaphor: something that looks perfect from a distance but is crumbling when you get close.
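A bit-crusher does two things: it quantizes the amplitude into a coarse set of steps and reduces the effective sample rate. A minimal numpy sketch of the idea; the function name and settings are mine, not the plugin used on the record:

```python
import numpy as np

def bitcrush(signal, bits=8, downsample=4):
    """Quantize amplitude to 2**bits levels, then sample-and-hold every
    `downsample`-th sample to fake a lower sample rate (aliasing grit)."""
    levels = 2 ** bits
    crushed = np.round(signal * (levels / 2)) / (levels / 2)
    held = np.repeat(crushed[::downsample], downsample)[: len(signal)]
    return held

# A clean 440 Hz "pad" tone, then its corrupted twin
sr = 44100
t = np.arange(sr) / sr
pad = np.sin(2 * np.pi * 440 * t)
crushed = bitcrush(pad, bits=4, downsample=8)
```

At 4 bits the smooth sine collapses into 17 discrete amplitude steps, which is exactly the "crumbling up close" texture described above.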

The drums were programmed to feel inhuman. In Soul music, you usually want drums to swing, to feel loose and groovy. Here, I quantized everything perfectly to the grid. The hi-hats are rigid. The kick drum is a cold, dead thud. There is no groove, only a relentless, robotic pulse. This is the sound of someone rigidly sticking to a rehearsed story, fearing that any deviation will reveal the truth.
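The difference between "grid" and "groove" is just timing offsets. A small sketch, with an illustrative tempo and swing amount, of where 16th-note hits land in each case:

```python
sr = 44100
sixteenth = sr // 8      # ~one 16th note at 120 BPM, in samples

# Rigid grid (what this track uses): every hit on an exact multiple
grid = [i * sixteenth for i in range(16)]

# Soul-style swing, for contrast: each off-16th lands late, at 60%
# of the 16th-note pair instead of dead-center at 50%
swing = 0.6
swung = [i * sixteenth + (int(sixteenth * (2 * swing - 1)) if i % 2 else 0)
         for i in range(16)]
```

Delete the offsets and every hit snaps back to the machine: that is the rehearsed story, repeated without deviation.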

Phase 3: The Glitch Architecture

To take the theme of "broken reality" further, I employed what I call "Glitch Architecture." Throughout the track, the audio momentarily stutters, skips, or drops out entirely for a millisecond.

I achieved this by manually chopping up small sections of the master audio file and deleting tiny slivers, or repeating a fraction of a second rapidly. It’s disorienting for the listener. Just as you get settled into the groove, the floor disappears from under you. It mimics that mental short-circuit that happens when you catch someone in a lie, and your brain struggles to process the contradiction.
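In code terms, those chops are just slice operations on the audio buffer. A hedged numpy sketch of the two moves described, stutter and dropout; the event count, slice sizes, and random seed are arbitrary stand-ins for edits I make by hand:

```python
import numpy as np

rng = np.random.default_rng(7)

def glitch(audio, sr=44100, events=8):
    """At a handful of random points, either stutter (rapidly repeat a
    10-50 ms slice) or drop out (silence a sliver), on a copy of the buffer."""
    out = audio.copy()
    for _ in range(events):
        start = int(rng.integers(0, len(out) - sr // 10))
        size = int(rng.integers(sr // 100, sr // 20))   # 10-50 ms
        if rng.random() < 0.5:
            piece = out[start:start + size]        # the slice to repeat
            reps = np.tile(piece, 4)               # rapid-fire stutter
            out[start:start + len(reps)] = reps[: len(out) - start]
        else:
            out[start:start + size] = 0.0          # the floor disappears
    return out

sr = 44100
audio = np.sin(2 * np.pi * 220 * np.arange(sr * 2) / sr)
glitched = glitch(audio, sr=sr)
```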

Phase 4: The Vocal Delivery

Recording vocals for this track was exhausting. My natural instinct when singing about anger is to push hard, to belt it out. But this song isn't about explosive anger; it's about devastation. It’s the quiet conversation at 3 AM when everything has already fallen apart.

I sang the verses almost in a monotone, barely above a whisper. I wanted the delivery to sound drained, as if the singer had no energy left to fight. This restraint makes the lyrics hit harder. When you whisper something devastating, it sounds more true than screaming it.

In the mix, the vocal is extremely dry (no reverb) and compressed very heavily. It sits right in the front of the mix, uncomfortably close to the listener. There is nowhere for the vocal to hide, just as there is nowhere left for the liar to hide.

Conclusion: Finding Beauty in the Ugly

When I finished mixing The Lie Was Worse, I didn't want to listen to it again for a week. It’s a heavy track. It’s not something you put on at a party. But it is one of my proudest productions because it is honest.

Electronic Soul isn't always about smooth vibes and chill beats. Sometimes, it's about using technology to amplify the ugliest parts of the human experience so we can look at them, understand them, and eventually, move past them.

Dec 14, 2025 Philosophy

Digital Loneliness: How Technology Influences My Sound Design

The name of this project, Tos Connect, is a bit of a double entendre. In Khmer, "Tos" roughly translates to "Let's go." It is an invitation. "Connect" implies unity, joining together, or logging on. But if you listen closely to the music, you will hear that it isn't really about connection at all. It is about the failure to connect.

We are living in the loneliest generation in human history, despite being the most "connected." We carry the entire world in our pockets. We can video chat with someone in New York while sitting in a cafe in Phnom Penh. Yet, the quality of our interactions has degraded. We trade eye contact for likes. We trade conversation for comments.

This paradox—the feeling of being alone in a crowded digital room—is the foundational concept behind my sound design. When I sit down to produce a track, I am not just trying to make a cool beat. I am trying to answer a question: What does digital loneliness sound like?

The Sound of Anxiety: High Frequencies

If you analyze the soundscape of modern life, it is dominated by high-frequency noise. The whine of a computer fan, the buzz of a fluorescent light, the subtle "ding" of a notification. These sounds trigger a subconscious anxiety in us.

In my production, I often use high-pitched sine waves or filtered white noise to mimic this "tech anxiety." For example, in the bridge of We're Just Wallpaper, there is a very faint, high-pitched tone that slowly rises in volume. It isn't loud enough to hurt, but it is annoying enough to make you feel tense. It mimics the feeling of tinnitus you get after staring at a screen for eight hours straight.
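That kind of tension riser is easy to sketch: a sine in the "tinnitus" register with a slow linear fade-in, capped well below the rest of the mix. The frequency, length, and level below are illustrative, not the values in the actual session:

```python
import numpy as np

sr = 44100
dur = 8.0                              # length of the riser in seconds
t = np.arange(int(sr * dur)) / sr

# A high sine that creeps in: never loud enough to hurt,
# just loud enough to make you tense
tone = np.sin(2 * np.pi * 12000 * t)
fade = np.linspace(0.0, 0.05, len(t))  # peaks around -26 dBFS
riser = tone * fade
```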

I also use "clicks" and "pops" as percussion elements. I sample the sound of a mouse clicking, a keyboard tapping, or a phone unlocking. When you sequence these sounds into a rhythm, they become hypnotic but cold. They remind the listener that the machine is always present.

The Void: Reverb as a Metaphor

Reverb is the effect we use to put a sound in a "space." In Pop music, reverb is used to make things sound epic and expensive. In Electronic Soul, I use reverb to represent distance.

I often set up what I call "The Void Reverb." It is a reverb plugin set to a decay time of 10 seconds or more, with all the high frequencies cut out. It sounds like you are standing in a massive, dark, empty warehouse. I will take a warm, human sound—like a piano chord or a hum—and send it into this massive reverb. The sound gets swallowed by the space.
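You can approximate the "Void" character with a single damped comb filter: a long feedback delay with a one-pole low-pass inside the loop, so every echo comes back both quieter and darker. A toy sketch under my own assumptions; a real patch would sum several of these in parallel, and all values here are illustrative:

```python
import numpy as np

def void_reverb(x, sr=44100, decay_s=10.0, delay_s=0.08, damp=0.6):
    """One damped comb filter: feedback delay plus a one-pole low-pass
    in the loop, so high frequencies die first on every repeat."""
    d = int(delay_s * sr)
    # feedback gain chosen so the tail falls 60 dB over `decay_s` seconds
    g = 10 ** (-3.0 * delay_s / decay_s)
    out = np.zeros(len(x) + int(decay_s * sr))
    buf = np.zeros(d)
    lp = 0.0
    for n in range(len(out)):
        dry = x[n] if n < len(x) else 0.0
        echo = buf[n % d]
        lp = (1 - damp) * echo + damp * lp   # darken each echo
        y = dry + g * lp
        buf[n % d] = y
        out[n] = y
    return out

# Feed it a single click; the tail rings out, darker on every pass
# (tiny sample rate just to keep the sketch quick)
impulse = np.zeros(100)
impulse[0] = 1.0
tail = void_reverb(impulse, sr=1000, decay_s=2.0, delay_s=0.05, damp=0.5)
```

Send a warm piano chord into this instead of a click and you get the "swallowed by the warehouse" effect: the source vanishes, and only a dark wash answers back.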

This represents the experience of posting something vulnerable online. You pour your heart out into a status update or a video, and you send it out into the massive void of the internet. Sometimes it echoes back, but often, it just disappears into the digital silence.

Phantom Vibrations: The Low End

Have you ever felt your phone vibrate in your pocket, reached for it, and realized it never rang? That is called "Phantom Vibration Syndrome." It is a physical manifestation of our psychological addiction to connectivity.

I try to recreate this feeling using sub-bass. In many of my tracks, there are pulses of sub-bass (frequencies below 60Hz) that you feel more than you hear. They hit your chest unexpectedly. They aren't always synced perfectly to the kick drum. They are little "ghost pulses" that keep you on edge. It grounds the music in a physical sensation, reminding you that even though the digital world isn't real, the physical reaction to it is.
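A sketch of the ghost-pulse idea: short bursts of a sub-frequency sine dropped slightly off the beat grid. The tempo, frequency, and positions are invented for the example:

```python
import numpy as np

sr = 44100
bpm = 80
beat = int(sr * 60 / bpm)
track = np.zeros(beat * 16)            # four bars of 4/4 silence

def sub_pulse(freq=45.0, dur=0.4):
    """A short sub-bass sine burst with a fast fade-in and a longer
    fade-out, so it thumps instead of clicking."""
    t = np.arange(int(sr * dur)) / sr
    env = np.minimum(1.0, t / 0.01) * np.minimum(1.0, (dur - t) / 0.1)
    return np.sin(2 * np.pi * freq * t) * env

# Place the pulses slightly OFF the grid: hits you feel, not hear
for pos in (int(beat * 1.3), int(beat * 5.7), int(beat * 10.2)):
    p = sub_pulse()
    track[pos:pos + len(p)] += 0.5 * p
```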

The Human Error in the Grid

The ultimate tragedy of technology is that it demands perfection, but humans are inherently flawed. Social media demands the perfect photo. Autocorrect fixes our spelling. Auto-Tune fixes our pitch.

To combat this, I leave "human errors" in my final masters. If my voice cracks slightly on an emotional high note, I keep it. If I rush the timing on a piano riff because I was excited, I don't quantize it. These mistakes are the proof of life. They are the "Soul" in Electronic Soul.

I vividly remember recording the synth solo for The Neutral Zone. I played a wrong note, a dissonant sharp 4th, and immediately stopped to re-record. But when I listened back, that wrong note was the most interesting part of the melody. It sounded like a glitch, a moment of confusion. I kept it. In a world of curated perfection, ugliness is the only thing that feels true.

Conclusion: The Invitation

Ultimately, technology is a tool. It isn't good or bad; it is just a mirror. It reflects our own insecurities back at us at the speed of light.

My music isn't a protest against technology. I love synthesizers. I love the internet. But Tos Connect is a reminder that we need to find the ghost in the machine. We need to acknowledge the sadness that comes with the screen.

So, the next time you listen to my tracks, don't just listen to the melody. Listen to the static. Listen to the empty space between the notes. That is where I am. That is where we all are.

Dec 13, 2025 Music Theory

The Psychology of the Loop: Why Repetition is the Language of the Soul

One of the most common criticisms leveled against electronic music by traditionalists is that it is "too repetitive." They hear a four-bar loop playing over and over for six minutes and they call it lazy. They ask, "Where is the bridge? Where is the key change? Why doesn't it go anywhere?"

But they are missing the point. In Electronic Soul, the repetition isn't a lack of ideas. It is the destination itself. The loop is not just a production tool; it is a psychological device designed to alter your state of consciousness.

When I write for Tos Connect, I am obsessed with the concept of the "Infinite Loop." Whether it is the bassline in No Hard Work or the synth arpeggio in The Neutral Zone, I use repetition to bypass the listener's logical brain and speak directly to their subconscious. Here is the science and the soul behind why we loop.

The "Trance" State and Brainwaves

Human beings have used repetition to access higher states of consciousness for thousands of years. Think of the rhythmic drumming in shamanic rituals, the chanting of mantras in Buddhism, or the repetitive structure of Gospel spirituals. When the brain hears a rhythmic pattern repeated without change, it eventually stops trying to "predict" the future. It relaxes.

In music psychology, this is often linked to the shift from Beta brainwaves (active, alert, anxious) to Alpha and Theta brainwaves (meditative, dreamlike, creative). When you listen to a pop song with constant changes, your brain is constantly processing new information. It keeps you alert. But when you listen to a deep electronic loop, your brain eventually "surrenders" to the groove.

This is the goal of my production. I want to induce a mild trance state. I want the listener to stop analyzing the lyrics for a moment and just exist inside the sound. That is where the "Soul" comes in. You can't feel your soul if your brain is too busy analyzing the chord progression.

The Machine vs. The Mantra

There is a beautiful tension in Electronic Soul between the machine and the mantra. A drum machine (like the Roland TR-808 sounds I often use) creates a loop that is mathematically perfect. It never gets tired. It never rushes. It never drags. It is eternal.

When you layer a human element over that—like a vocal phrase repeated over and over—it changes the meaning of the words. Psychologists call this "semantic satiation": say a word too many times and it dissolves into nonsense. Music inverts the effect. Sing a phrase too many times and it gains emotional weight instead.

Take the end of my track I Was The Storm. The vocal line repeats, "I didn't mean to rain, I didn't mean to rain." The first time you hear it, it's an apology. The tenth time, it's a realization. By the fiftieth time, it becomes a texture, a wash of regret that surrounds you. The loop allows the emotion to sink deeper than a linear narrative ever could.

The Architecture of a "Living Loop"

However, there is a danger. If a loop is truly static—meaning exactly the same digital information repeated—the brain eventually gets bored and tunes it out. This is called "Habituation." To prevent this, I use a technique I call the "Living Loop."

In my DAW (Ableton Live), I never just copy-paste a clip for four minutes. I set up automation lanes that are constantly moving, but very slowly. I might have a filter on the synthesizer that opens up by 1% over the course of 32 bars. I might have the reverb decay time increase slightly every time the snare hits.

The listener doesn't consciously notice these changes. They feel like the loop is the same. But subconsciously, their brain registers that the sound is evolving. It feels "organic," like a breathing organism. This is how you keep a listener engaged with a repetitive track for five minutes. You have to make the static feel like it is moving.
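A minimal version of a Living Loop: tile a static clip, then run it through a one-pole low-pass whose cutoff creeps upward too slowly to notice. All numbers here are illustrative stand-ins for what would be automation lanes in Ableton (and the sample rate is deliberately low to keep the sketch fast):

```python
import numpy as np

sr = 8000                 # low rate just to keep the sketch quick
bar = sr * 2              # one bar of 4/4 at 120 BPM
loop = np.sin(2 * np.pi * 110 * np.arange(bar) / sr)   # a static drone
clip = np.tile(loop, 32)  # the "same" loop, copy-pasted for 32 bars

# The automation lane: a low-pass cutoff creeping from 200 Hz to 2 kHz
cutoff = np.linspace(200.0, 2000.0, len(clip))
alpha = 1.0 - np.exp(-2 * np.pi * cutoff / sr)  # one-pole coefficient

out = np.empty_like(clip)
state = 0.0
for n in range(len(clip)):
    state += alpha[n] * (clip[n] - state)   # y[n] = y[n-1] + a*(x[n]-y[n-1])
    out[n] = state
```

Bar 1 and bar 32 contain the "same" loop, but the last bar is measurably brighter; the listener never catches the change, only the sense that the sound is alive.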

Polyrhythms: The Friction of Life

Another way to keep loops interesting is through polyrhythms. This is where you have two loops of different lengths playing against each other. For example, a drum beat that loops every 4 beats, playing against a synth melody that loops every 3 beats.

They will line up perfectly at the start, but then they drift apart, creating friction and tension. Eventually, after 12 beats, they align again. This creates a sense of "resolution" without changing the notes.
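That realignment point is just the least common multiple of the two loop lengths, which you can verify in a few lines:

```python
from math import lcm

drum_loop = 4    # drums repeat every 4 beats
synth_loop = 3   # synth melody repeats every 3 beats

# The two patterns drift apart and realign every LCM(4, 3) beats
cycle = lcm(drum_loop, synth_loop)
print(cycle)  # → 12

# Beats (over two cycles) where BOTH loops restart together
aligned = [b for b in range(24) if b % drum_loop == 0 and b % synth_loop == 0]
print(aligned)  # → [0, 12]
```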

I use this to represent the chaos of modern life. We have our internal rhythm (our heartbeat, our breath), and then we have the external rhythm (the clock, the notifications, the traffic). They rarely line up. Tos Connect is about exploring that friction. When the rhythms finally align in the track, it provides a moment of catharsis—a brief moment where the internal and external worlds match up.

Conclusion: The Comfort of Consistency

In a world that is chaotic and unpredictable, repetition offers comfort. We doom-scroll because we want the dopamine loop. We re-watch the same TV shows because we know how they end. We listen to looped music because it offers a sense of stability.

When I produce, I am trying to build a safe space inside the loop. A place where, for three minutes and thirty seconds, you know exactly what is going to happen next. In that predictability, you are free to feel whatever you need to feel. You can cry, you can dance, or you can just stare at the ceiling. The loop will catch you.

So, is Electronic Soul repetitive? Yes. That is exactly the point.

Dec 12, 2025 Industry & AI

The Algorithm vs. The Artist: Finding Soul in the Age of AI

You cannot scroll through a social media feed in 2025 without seeing it: "This song was written by AI." "This voice was generated by a computer." We are living in a moment that feels like science fiction. For musicians, it is terrifying. For listeners, it is confusing.

As an artist whose genre is literally called "Electronic Soul," I exist at the intersection of these two warring factions. I use machines to make music every day. My drummer is often a computer. My piano is often a synthesizer. But there is a line in the sand that I believe we are crossing, and it forces us to ask the most important question of our generation: If a machine can make perfect music, what is the point of the human musician?

Here is why I am not afraid of the robot, and why I believe the rise of AI will actually spark a massive return to raw, imperfect, human emotion.

The "Content" Treadmill

The biggest threat to music right now isn't AI itself; it is the "Content Economy." Platforms like TikTok and Spotify prioritize quantity over quality. The algorithm demands that artists feed it constantly. It treats a song the same way it treats a 15-second video of a cat falling off a chair. It is just "content" to keep you scrolling.

AI is the ultimate tool for this treadmill. It can generate 1,000 lo-fi beats in a minute. It can write generic pop lyrics faster than I can tune my guitar. If music is just "background noise" for your life, then yes, humans have lost. We cannot compete with the speed of a processor.

But Tos Connect was never about background noise. It is about the foreground. It is about the moment you stop scrolling and start listening.

The Value of Imperfection

Here is the secret that AI engineers don't understand: Music isn't good because it is perfect. Music is good because it is flawed.

When you listen to a classic Stevie Wonder record, the tempo fluctuates. He speeds up when he gets excited. He slows down when he gets sad. His voice cracks when he hits a high note that is just out of his range. Those "mistakes" are what make you cry. They communicate a biological reality: I am a human, I am struggling, and I am pushing against my limits.

AI doesn't have limits. It can hit any note. It can play any rhythm perfectly on the grid. And because it has no struggle, it has no tension. It is uncanny. It is like looking at a face that is too symmetrical. Your brain knows something is wrong.

In my track No Hard Work, there is a moment in the second verse where my timing is slightly late on the snare drum. I debated fixing it in Ableton. I could have dragged the audio file 10 milliseconds to the left and made it "perfect." But I left it. That tiny delay creates a feeling of laziness, of dragging your feet, which perfectly matches the lyrics about exhaustion. An AI would have "fixed" that. And it would have ruined the song.

Context is Everything

We don't just listen to sound waves; we listen to stories. When we hear a breakup song, we connect with it because we know the singer actually had their heart broken. We empathize with their pain. The context of the creator's life adds weight to the art.

If an AI generates a breakup song, the lyrics might be "correct." They might rhyme "tears" with "fears." But you know that the machine has never felt heartbreak. It has never waited by the phone. It has never cried in a bathroom at a party. Without that context, the words are hollow. It is a simulation of emotion, not the transmission of emotion.

This is why I share so much of my process in this journal. I want you to know that when I sing about loneliness in The Neutral Zone, it isn't a prompt I typed into a chat bot. It is a real night I spent staring at the ceiling in Phnom Penh.

AI as a Tool, Not a Savior

Am I anti-technology? Absolutely not. I use AI tools in my workflow. I use algorithms to help separate audio stems (isolating vocals from drums). I use randomizers to generate synth patches.

But I treat AI like a "Random Number Generator," not a "Collaborator." I use it to throw paint at the wall, but I decide what stays and what goes. The curation is the art. The choice is the art.

In the future, I believe we will see a split in the music industry. There will be "Functional Music"—AI-generated sleep sounds, workout beats, and elevator music. That market will belong to the machines.

But on the other side, there will be "Human Music." And I believe it will become more raw, more acoustic, and more "live" than ever before. We will crave the sound of a finger sliding on a guitar string. We will crave the sound of a room. We will pay a premium for the proof of humanity.

Conclusion: The Ghost in the Machine

We are entering the "Uncanny Valley" of culture. As the internet floods with fake images and fake voices, reality becomes a luxury product.

My promise to you, as a listener of Tos Connect, is that there will always be a ghost in the machine. There will always be a human hand turning the knob. There will always be a real heart breaking behind the microphone.

Let the algorithms optimize for engagement. I will optimize for the soul.

Enjoying the music?

If these production notes helped you, consider fueling the next session.

☕ Buy me a Coffee
Dec 11, 2025 Sound Design

The Art of Sample Hunting: Finding Music in the Noise of Phnom Penh

If you have ever visited Phnom Penh, you know that silence is a luxury. The city is a chaotic symphony of motorbike engines, construction drills, street vendors chanting through megaphones, and the sudden, overwhelming roar of a monsoon downpour.

For most people, this is just noise. It is something to block out with noise-canceling headphones. But for me, as a producer of Electronic Soul, this city is my biggest instrument. It is an endless library of textures waiting to be recorded, chopped, and turned into music.

This is the story of how I turn the chaos of Cambodia into the calm of Tos Connect, and why I believe "Sample Hunting" is the cure for generic production.

The Problem with Splice

We live in the golden age of convenience. For $10 a month, any producer can subscribe to Splice or Loopcloud and download millions of perfectly recorded drum loops and synth shots. This is an amazing tool, but it creates a problem: Homogeneity.

If I download a "Lofi Snare" sample, thousands of other producers have downloaded that exact same file. We are all painting with the same colors. The music starts to sound indistinguishable. To stand out, you need to create sounds that nobody else owns.

That is why I started field recording. When I record the sound of a metal gate slamming shut in my alleyway, I own that sound. It has a specific reverb tail that only exists in my neighborhood. It is a sonic fingerprint that cannot be pirated.

The Gear: Capturing the World

You don't need expensive equipment to start sample hunting. While I occasionally use a Zoom H4n Pro recorder for stereo imaging, 80% of my field recordings are captured on my iPhone using the Voice Memos app.

The "low fidelity" nature of a phone microphone is actually a stylistic choice. Phone mics are designed to capture mid-range frequencies (the human voice) and crush the dynamic range. This creates a gritty, compressed texture that sits perfectly in a Lo-Fi or Electronic mix. It sounds "real" because it sounds like the device we use to experience the world every day.

Case Study 1: The Central Market Drone

One of my favorite hidden sounds in Phnom Penh is the ambient drone inside Phsar Thmei (Central Market). The building is crowned by a massive Art Deco dome, and the acoustics inside are bizarre—sounds reflect and bounce for seconds, creating a natural wash of reverb.

I stood in the center of the dome and recorded five minutes of crowd noise. In the studio, I didn't use this as a background effect. I loaded the audio into a sampler in Ableton Live and found a tiny ½ second loop of a woman’s voice echoing. I pitched it down an octave and drowned it in reverb.

The result was a haunting, ghostly choir pad. It serves as the harmonic bed for my unreleased track, Ghosts of the Market. It sounds like a synthesizer, but it has the organic fluctuation of human life inside it.

Case Study 2: The Tuk-Tuk Bassline

The Cambodian Tuk-Tuk (the classic moto-remorque) has a very distinctive engine sound. It is a low, rhythmic chugging: thug-thug-thug-thug. It is essentially a low-frequency oscillator (LFO) in the real world.

I recorded an idling Tuk-Tuk engine from about one foot away. Back in the DAW, I used an EQ to cut out all the high frequencies (the rattling metal) and isolated just the low-end thud. I then put a "Gate" effect on it, triggered by my kick drum.

Now, every time the kick drum hits, the Tuk-Tuk engine rumbles underneath it. It adds a mechanical, gritty texture to the bass that a clean sine wave simply cannot replicate. It gives the track a sense of motion, literally and figuratively.
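For readers who want to try this, the kick-triggered gate can be sketched in a few lines of pure Python. This is a toy model under my own assumptions—function names, thresholds, and release times are mine, and Ableton's actual Gate device does far more—but it shows the core idea: the kick signal opens the gate, and the engine rumble only passes while the gate is open.

```python
def kick_triggered_gate(engine, kick, threshold=0.5, release=0.95):
    """Open the gate on `engine` whenever `kick` exceeds a threshold;
    otherwise let the gate fade closed. Signals are plain lists of
    floats in [-1.0, 1.0]."""
    gate = 0.0
    out = []
    for e, k in zip(engine, kick):
        if abs(k) > threshold:
            gate = 1.0          # kick hit: open the gate instantly
        else:
            gate *= release     # no kick: smooth release toward closed
        out.append(e * gate)
    return out

# Toy signals: a constant engine rumble and a single kick hit.
engine = [0.8] * 8
kick   = [0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0]
gated  = kick_triggered_gate(engine, kick)
```

Before the kick hits, the engine is silent; on the hit it passes at full level, then trails off as the gate releases.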

The Technique: Granular Synthesis

The secret weapon for turning noise into music is Granular Synthesis. This is a technique where software breaks an audio file into thousands of tiny "grains" (a few milliseconds of sound each), then plays them back overlapping, scattered across time, as a shifting cloud of texture.

I often take "ugly" sounds—like the screech of tires or the buzz of a neon light—and run them through a granular synth (like Granulator II in Ableton). The software blurs the harsh transients until they become a smooth, silky texture. It’s like taking a photo of a garbage dump and blurring it until it just looks like abstract colors.
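Here is a minimal pure-Python sketch of that idea—all names and numbers are mine, and real granular engines like Granulator II add pitch shifting, smarter envelopes, and real-time control. Each grain is copied from a random spot in the source, faded in and out with a window so its harsh edges disappear, and layered into an output buffer.

```python
import math
import random

def granular_cloud(source, grain_len=64, n_grains=200, out_len=1024, seed=1):
    """Scatter short, faded 'grains' copied from random positions in
    `source` across an output buffer, overlapping them into a smooth
    cloud of sound."""
    rng = random.Random(seed)
    out = [0.0] * out_len
    # Hann window: fades each grain in and out to hide harsh transients.
    window = [0.5 - 0.5 * math.cos(2 * math.pi * i / (grain_len - 1))
              for i in range(grain_len)]
    for _ in range(n_grains):
        src = rng.randrange(0, len(source) - grain_len)   # where to read
        dst = rng.randrange(0, out_len - grain_len)       # where to write
        for i in range(grain_len):
            out[dst + i] += source[src + i] * window[i] / n_grains
    return out

# An "ugly" source: a harsh square-ish buzz.
buzz = [1.0 if (i // 8) % 2 == 0 else -1.0 for i in range(2048)]
cloud = granular_cloud(buzz)
```

The buzz goes in as hard edges and comes out as a diffuse wash, which is exactly the blurring effect described above.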

In The Lie Was Worse, the "shimmer" sound in the chorus isn't a synth. It is a granular cloud made from a recording of rain hitting a tin roof. By freezing the sound of the rain droplets, I turned a percussive sound into a tonal instrument.

Respecting the Source

There is an ethical component to field recording. You are taking a piece of the world and re-contextualizing it. I am always careful not to record private conversations or exploit people's privacy. I am looking for the sound of the space, not the specific words of a stranger.

But when you do it right, it adds a layer of subconscious geography to the music. Even if the listener has never been to Cambodia, they can feel the humidity in the recording. They can feel the density of the air. The "Silence" in my tracks isn't digital black (absolute silence). It is the "Room Tone" of my life here. It creates a sense of place.

Conclusion: The World is Your Studio

My advice to any producer feeling uninspired is to close the laptop and open the window. Or better yet, go for a walk. The world is screaming melodies at you.

That construction site down the street? That's an industrial drum kit. That bird chirping at 5 AM? That's a lead melody. That hum of your refrigerator? That's a drone note in the key of B-flat.

Stop searching for the perfect preset pack. The most unique sounds on earth are free, they are infinite, and they are happening right outside your door. You just have to press record.

Dec 10, 2025 Music Business

Surviving as an Indie Artist: The Economics of Electronic Soul in 2025

The romantic image of the "starving artist" is dead. In 2025, if you are starving, you aren't noble; you are just bad at business. That might sound harsh, but it is the reality of the modern music industry. We are no longer waiting for a record label executive in a suit to swoop down and "discover" us. We are the label. We are the PR team. We are the CEO.

As an independent artist based in Southeast Asia, building Tos Connect hasn't just been a creative journey; it has been an entrepreneurial one. I often get asked how I manage to sustain a career in a niche genre like Electronic Soul without major label backing. The answer isn't magic. It's math, data, and a relentless refusal to compromise on ownership.

Here is a transparent look at the economics of being an indie artist today, and why I believe there has never been a better time to be independent.

The Geography Advantage

For decades, artists believed they had to move to Los Angeles, London, or New York to "make it." That path is now a trap. The cost of living in those cities is so high that you spend 60 hours a week working a job you hate just to pay rent, leaving you zero energy to make music.

Building my career from Phnom Penh has been my biggest strategic advantage. The cost of living here allows me to keep my "burn rate" low. I don't need to generate $5,000 a month just to survive. This financial freedom buys me the most valuable asset for any artist: Time.

I can spend three days tweaking a snare sound for The Lie Was Worse because I'm not panicked about making rent tomorrow. In the digital economy, your location matters less than your connection. My WiFi in Cambodia uploads to the same Spotify servers as a studio in Hollywood.

Data Over Ego: The Spotify Strategy

Many artists ignore their analytics because they think it kills the "vibe." I look at my Spotify for Artists dashboard every morning. It isn't about obsession; it's about understanding the audience.

When I released No Hard Work, the data showed a spike in listeners from Santiago, Chile. I have never been to Chile. I don't speak Spanish. But the "Electronic Soul" sound resonates with the Chillwave/Lofi culture there. Seeing that data allowed me to pivot. I started running small Instagram ads targeting Santiago. I connected with listeners there.

If I were signed to a major label, they would likely ignore those 500 listeners in Chile because they are chasing the millions in the US. But as an indie, those 500 people are my tribe. I can nurture that relationship directly.

The Myth of the "Viral Hit"

Everyone wants to go viral on TikTok. It is the lottery ticket of 2025. But building a career on a viral moment is like building a house on sand. The moment the trend changes, your house collapses.

My strategy is the "Slow Burn." I am not trying to get one million streams in a week. I am trying to get 1,000 listeners who will listen to my songs for the next ten years. This is Kevin Kelly's "1,000 True Fans" theory. If you have 1,000 fans who will buy your merch, come to your shows, and stream your albums, you have a sustainable middle-class living.

This is why I write these blog posts. A viral video lasts 15 seconds. A deep connection with a fan who reads about your life lasts a lifetime. AdSense and blog content are actually part of my music ecosystem—they allow people to enter my world through words, even before they hear the notes.

Diversifying the Revenue Stream

Relying solely on streaming royalties is financial suicide. Spotify pays roughly $0.003 per stream, so you need thousands of streams just to buy a sandwich, and millions to pay a year's rent. To survive, you must diversify.

1. Sync Licensing: This is where the real money is for electronic producers. "Sync" means getting your music into TV shows, commercials, or video games. My tracks are designed with this in mind—atmospheric, moody, and heavy on instrumental sections. A single placement in a Netflix show can pay more than 500,000 streams.

2. Sample Packs: Remember the earlier article on field recording? I package those sounds. I sell packs of "Phnom Penh Ambience" or "Electronic Soul Drums" to other producers on platforms like Gumroad. I am selling the pickaxes to the other gold miners.

3. Merchandise: But not just t-shirts with a logo. I am working on limited runs of physical items that relate to the music—cassette tapes, lyric books, high-quality prints. Physical objects have value in a digital world.
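The arithmetic behind "diversify" is easy to check. A toy calculation, assuming a flat $0.003 per stream (real payouts vary widely by platform, country, and subscription type, so treat these numbers as illustration, not accounting):

```python
def streams_needed(target_usd, per_stream=0.003):
    """How many streams at a given payout rate to reach a dollar target."""
    return round(target_usd / per_stream)

# Illustrative figures only; the per-stream rate is an assumption.
sandwich     = streams_needed(5)    # a $5 sandwich costs ~1,667 streams
monthly_rent = streams_needed(600)  # $600 rent costs 200,000 streams
```

One modest sync placement or a handful of sample-pack sales can match months of that streaming volume, which is the whole argument for multiple revenue streams.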

The Cost of Independence

I don't want to paint a fake picture. Being independent is exhausting. You are the HR department, the copyright lawyer, the graphic designer, and the janitor. You spend 50% of your time making spreadsheets and answering emails.

There are days I wish I had a manager to handle the booking. There are days I wish I had a label to pay for a big marketing campaign. But then I look at my contract: I own 100% of my masters.

In the music industry, your "Masters" (the actual recording files) are your real estate. When a label signs you, they usually take ownership of the masters forever. You become a tenant in your own house. By staying independent, I own my house. If one of my songs blows up in 20 years, I get the check, not a corporation.

Conclusion: Bet on Yourself

The gatekeepers are gone. You can distribute your music to the entire planet for $20 a year. You can market to your exact niche for $5 a day. You can record a symphony on a laptop.

The only thing stopping you is the belief that you need permission. You don't. You just need to work. Not "Hard Work" in the sense of burnout, but consistent, smart, strategic work.

Tos Connect is more than just a band name; it is a mindset. Connect the dots. Connect the revenue streams. Connect with the people who actually care. The industry doesn't owe you a career. You have to build it, brick by brick.

Dec 09, 2025 Vision & Community

The Vision for 2026: Evolving the Sound and Building the Tribe

We have covered a lot of ground in this journal. We’ve deconstructed lyrics, analyzed production techniques, explored the philosophy of digital loneliness, and broken down the economics of being an indie artist. If you have read this far, you understand that Tos Connect is more than just a Spotify profile; it’s a living project dedicated to exploring the intersection of humanity and technology.

But an artist cannot stand still. The moment you define your sound too rigidly is the moment it begins to die. "Electronic Soul" was the starting point, but it is not the destination.

As I look toward the end of 2025 and into 2026, the vision for this project is shifting. It’s time to move beyond just releasing singles and start building a world. Here is the blueprint for the next phase of the Tos Connect sound, the challenge of live performance, and the necessity of building a real community.

Phase 1: The Hardware Evolution

Up until now, 90% of my music has been made "in the box" (inside a laptop). It was necessary for speed and budget. But software, for all its convenience, lacks chaos. A digital synthesizer will play the exact same sound every time you press a key. It is safe.

The next evolution of my sound involves introducing danger back into the process. I am beginning to invest in analog hardware—specifically modular synthesis. Modular synths are unpredictable beasts. You plug cables in, and sometimes magic happens; sometimes just noise happens. They drift out of tune. They react to the temperature of the room.

You will hear this influence in upcoming releases. The basslines will feel thicker, grittier, and less "perfect." The goal is to make electronic music that feels like it was dug out of the ground, not generated by a chip. It’s about finding the soul in the electricity itself, not just the vocals planted on top of it.

Phase 2: Solving the "Live Problem"

The biggest challenge for any electronic producer is the live show. How do you take a song that was painstakingly constructed layer by layer in a studio and perform it on stage without just pressing the spacebar on a laptop and waving your arms?

I have seen too many electronic "concerts" that are just glorified DJ sets. That is not what I want for Tos Connect. The music is too personal for that. The vision for the live show is a hybrid experience—part electronic manipulation, part raw musicianship.

I am currently designing a live rig centered around the Ableton Push 3 and live instrumentation. The goal is to deconstruct my tracks into their core elements—drums, bass, chords, vocals—and trigger them live, while leaving space for improvisation. I want the audience to see the sweat. I want there to be a real risk that I might mess up a loop or hit a wrong note. That vulnerability is essential to the "Soul" aspect of the genre. A live show should feel like a tightrope walk, not a movie screening.

Phase 3: Building the Tribe (Beyond the Stream)

In my article on indie economics, I talked about the "1,000 True Fans." But a fan is passive; a community member is active. The ultimate goal is to transition Tos Connect from a monologue (me releasing music to you) into a dialogue.

Streaming numbers are vanity metrics. They don't tell you who is actually connecting with the art. The future of this project relies on moving our interactions off platforms we don't control (like TikTok and Spotify) and onto owned spaces.

In late 2025, I plan to launch the Tos Connect Discord Server. This won't just be a place for me to post announcements. It will be a hub for producers to share feedback on their own tracks, for writers to discuss lyrics, and for listeners to dissect the themes of digital isolation we talk about here.

I also plan to release the "stems" (the isolated audio tracks) of my releases to this community. I want to hear how you remix No Hard Work. I want to see what a hip-hop producer in Atlanta or a techno DJ in Berlin does with my vocals. True connection happens when art becomes a collaboration.

The Final Manifesto

We are drowning in content. The world doesn't need another generic pop song. It doesn't need another perfectly polished Instagram influencer.

What we need—what I need—is something that feels real in a synthetic world. Music that acknowledges the anxiety of modern life without succumbing to nihilism. Music that uses the tools of the future to express the ancient emotions of the heart.

If you have been following this journey, thank you. Whether you are a fellow producer, a casual listener, or someone just looking for a soundtrack to a late-night drive, you are part of this. The groundwork is laid. The studio is built. The philosophy is defined.

Now, it’s time to turn up the volume. Tos.

Dec 08, 2025 Sound Design & Psychology

Designing Nostalgia: Why Electronic Soul Sounds Like a Memory

Have you ever listened to a song and felt a sudden pang of sadness for a time you never actually lived through? You might be sitting in a coffee shop in 2025, but the music transports you to a rainy night in Tokyo in 1985, or a sun-faded road trip in 1970. This phenomenon is called Anemoia—nostalgia for a time you’ve never known.

In the world of Tos Connect, this feeling isn't an accident. It is engineered. One of the core pillars of "Electronic Soul" is the deliberate degradation of audio. We use the most advanced computers in history to simulate the sound of broken, cheap, and decaying technology.

Why do we do this? Why do we crave the sound of static, hiss, and wobble in an age of perfect 4K clarity? This is a deep dive into the "Architecture of Nostalgia" and how I use audio imperfections to hack your memory.

The Texture of Time: Noise Floor

Digital silence is terrifying. If you open a modern DAW (Digital Audio Workstation) and stop playing, the silence is absolute. It is a mathematical zero. In the real world, "absolute silence" doesn't exist. Even in a quiet room, there is air pressure, the hum of electricity, the blood rushing in your ears.

To make my electronic tracks feel "human," I have to paint over that digital black canvas. I layer what producers call a "Noise Floor" into every track. In The Lie Was Worse, before the music even starts, you hear a faint hiss. That is a sample of a cassette tape running empty.

That hiss acts as a subconscious glue. It fills the empty spaces between the notes. It tells your brain: "This is real. This exists in a physical space." It warms up the cold digital synthesizers and tricks the listener into dropping their guard. It feels like finding an old shoebox of photos in the attic.
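The technique itself is trivial to sketch: mix a very quiet bed of noise under the signal so silence is never a mathematical zero. A toy model in pure Python—the level and names are mine, and in practice I use an actual tape-hiss sample rather than generated white noise:

```python
import random

def add_noise_floor(signal, hiss_level=0.002, seed=7):
    """Mix a faint bed of white noise under a signal so 'silence'
    still has texture instead of being digital zero."""
    rng = random.Random(seed)
    return [s + rng.uniform(-hiss_level, hiss_level) for s in signal]

# One second of perfect digital silence, warmed up with hiss.
silence = [0.0] * 1000
with_hiss = add_noise_floor(silence)
```

The hiss sits far below the music, but it keeps the gaps between notes from collapsing into that terrifying mathematical zero.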

The Warped Reality: Wow and Flutter

Old media formats—vinyl records, cassette tapes, VHS—were physically unstable. A record might be slightly warped, causing the pitch to drift up and down. A tape player motor might struggle to keep a constant speed, causing a fast vibration in pitch.

In technical terms, we call these "Wow" (slow pitch drift) and "Flutter" (fast pitch jitter). In the 1980s, engineers spent millions trying to eliminate these "problems." Today, I spend hundreds of dollars on plugins (like XLN Audio's RC-20 or iZotope Vinyl) to put them back in.

Why? Because pitch instability feels like a fading memory. When you try to recall a face from ten years ago, the image is blurry. It shifts. By applying "Wow" to my synth pads, I make the chords drift slightly out of tune. It creates a sensation of fragility. It sounds like the music is struggling to survive, like a memory that is slowly being forgotten. That fragility makes the music feel precious.
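Wow is simple to model: slowly modulate the pitch of a tone around its center frequency. A pure-Python sketch (rates and depths are illustrative, not pulled from any plugin):

```python
import math

def pad_with_wow(freq=220.0, wow_rate=0.5, wow_depth=0.01,
                 sr=44100, seconds=1.0):
    """Render a sine 'pad' whose pitch drifts slowly up and down,
    imitating the wow of a warped record. `wow_depth` is the
    fractional pitch deviation (1% here); `wow_rate` is how fast
    the drift cycles, in Hz."""
    out = []
    phase = 0.0
    for n in range(int(sr * seconds)):
        t = n / sr
        # Instantaneous frequency drifts sinusoidally around `freq`.
        f = freq * (1.0 + wow_depth * math.sin(2 * math.pi * wow_rate * t))
        phase += 2 * math.pi * f / sr
        out.append(math.sin(phase))
    return out

pad = pad_with_wow()
```

At 1% depth the drift is barely conscious, which is the point: the ear registers fragility before the mind registers detuning.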

The "Underwater" Effect: Low Pass Filtering

One of the most common techniques in modern production (popularized by artists like Drake or Tame Impala) is the aggressive use of Low Pass Filters. This cuts off the high frequencies of a sound, making it sound muffled and dark.

Psychologically, this mimics the effect of hearing music "from another room." Imagine you are at a party, but you stepped outside onto the balcony. You can still hear the music, but the walls dampen the crispness. You feel separated from the action.

I use this to invoke isolation. In the verses of No Hard Work, the main keys are heavily filtered. It puts the listener in the position of an observer, looking in from the outside. Then, when the chorus hits, I open the filter. The brightness returns. It creates a rush of clarity—a moment of stepping back into the room.
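The filter itself can be as simple as a one-pole low-pass, where a small coefficient muffles the signal ("hearing it from another room") and 1.0 passes it untouched. A minimal sketch—real production filters add resonance and steeper slopes, and the verse/chorus split here is my own toy framing:

```python
def one_pole_lowpass(signal, cutoff):
    """Simple one-pole low-pass. `cutoff` is in (0, 1]: small values
    muffle the signal, 1.0 passes it through unchanged."""
    y = 0.0
    out = []
    for x in signal:
        y += cutoff * (x - y)   # move a fraction of the way toward x
        out.append(y)
    return out

# A bright, fast-alternating signal...
bright = [1.0 if i % 2 == 0 else -1.0 for i in range(100)]
# ...heavily filtered for the verse, wide open for the chorus.
verse  = one_pole_lowpass(bright, 0.05)
chorus = one_pole_lowpass(bright, 1.0)
```

Sweeping that one coefficient from 0.05 to 1.0 over a bar is the "opening the filter" moment: the high frequencies rush back in and the listener steps into the room.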

Hauntology: The Future That Never Happened

There is a philosophical concept in music called "Hauntology." It suggests that our culture is haunted by the "lost futures" of the past. We keep recycling the aesthetics of the 80s and 90s because we are afraid of the future.

While some see this as a trap, I see it as a language. By using the sounds of the past—the glassy electric piano of the Yamaha DX7, the snap of the LinnDrum drum machine—I am speaking a language that the audience already understands emotionally. But I am changing the grammar.

I take these retro sounds and sequence them in modern, glitchy, impossible ways. It creates a sense of "Uncanny Valley." It sounds familiar, but something is wrong. It isn't a retro throwback track; it is a ghost story told by a computer. That tension is the sweet spot of Electronic Soul.

Case Study: "The Anchor and the Kite"

In my track The Anchor and the Kite, I wanted to create a feeling of childhood innocence being lost. I started by recording a simple melody on a toy piano I found at a market in Phnom Penh.

Then, I degraded it. I ran it through a "Bit Crusher" to reduce the bit depth and sample rate, making it sound like it was coming from an old Gameboy. Then, I ran it through a cassette simulation that introduced heavy "dropouts"—moments where the volume randomly cuts to zero for a fraction of a second.
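The bitcrushing half of that chain is simple to model: quantize the amplitude to a handful of levels (bit reduction) and hold each value for several samples (sample-rate reduction). A pure-Python sketch with my own illustrative settings, not the actual plugin parameters:

```python
import math

def bitcrush(signal, bits=4, downsample=4):
    """Reduce bit depth (quantize to 2**bits levels) and sample rate
    (hold each value for `downsample` samples) for a lo-fi,
    Gameboy-like texture."""
    half_levels = 2 ** bits / 2
    out = []
    held = 0.0
    for i, x in enumerate(signal):
        if i % downsample == 0:                       # sample-rate reduction
            held = round(x * half_levels) / half_levels  # bit reduction
        out.append(held)
    return out

# A clean 440 Hz tone at an 8 kHz sample rate, then crushed.
tone = [math.sin(2 * math.pi * 440 * n / 8000) for n in range(64)]
crushed = bitcrush(tone)
```

The smooth sine comes out as a staircase of coarse steps, which is exactly the chiptune grit described above.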

The result is a melody that sounds like a damaged home video. It sounds sweet, but it also sounds broken. It forces the listener to "lean in" to hear the melody through the damage. That active listening creates a much deeper emotional bond than a perfectly polished, crystal-clear piano ever could.

Conclusion: The Beauty of Decay

We live in a high-definition world. Our phone screens are retina-sharp. Our internet is 5G. Everything is instant and clean. But humans are not high-definition. We are messy. We age. We fade.

The reason we crave Lofi, Retro, and "Nostalgic" sounds is that they reflect our own mortality. A cracking vinyl record sounds alive because it is dying. A warped tape sounds human because it is imperfect.

With Tos Connect, I am not trying to build a perfect digital monument. I am trying to build a ruin. I want my music to sound like it has already lived a life before it reached your ears. I want it to sound like a memory you didn't know you had.

Dec 07, 2025 Production Strategy

The Architecture of Silence: Why Empty Space is the Most Important Instrument

When I first started producing music, I was terrified of silence. I treated a song like a suitcase I was trying to pack for a long trip—I stuffed every single corner with sound. If there was a gap in the frequency spectrum, I added a synth pad. If there was a lull in the rhythm, I added a shaker loop. I layered five kick drums because I thought that made the beat "bigger."

The result? A muddy, claustrophobic wall of noise. It didn't sound "big"; it sounded small, because there was no room for the music to breathe.

Over the years, the defining characteristic of the Tos Connect sound has become the opposite: The Architecture of Silence. I have learned that what you don't play is just as important as what you do play. In Electronic Soul, silence isn't just the absence of noise; it is a physical instrument that creates tension, impact, and intimacy.

The "Wall of Sound" Trap

In the 1960s, producer Phil Spector invented the "Wall of Sound"—a technique of layering dozens of instruments to create a dense, orchestral roar. It worked for 60s pop radio. But in modern electronic music, the Wall of Sound is often a trap.

We listen to music differently now. We listen on earbuds, in cars, and on laptops. When you cram 50 tracks into a mix, digital compression crushes them together. The listener's brain gets overwhelmed. There is no focal point. It’s like trying to listen to five people shouting at once.

My production philosophy shifted when I started studying visual design. In design, "Negative Space" (the white space around a logo) draws the eye to the subject. In music, silence draws the ear to the melody. By removing the clutter, you make the remaining elements feel massive.

The Mute Button as a Composition Tool

One of my favorite production techniques is "Destructive Arranging." I will build up a chorus until it is full and energetic—drums, bass, keys, lead synth, backing vocals. Then, I will duplicate that section and force myself to mute 50% of the tracks.

I ask myself: "Can this chorus survive with just the Bass and the Vocal?"

Often, the answer is yes. And not only does it survive, it thrives. When you strip away the decorative synths, the relationship between the bass and the voice becomes intimate. It forces the listener to pay attention to the lyrics. In No Hard Work, the second verse drops down to almost nothing—just a kick drum heartbeat and a dry vocal. That emptiness makes the listener lean in. It creates a feeling of vulnerability that a "full" arrangement would hide.

Reverb Needs Room to Die

We often use reverb to make things sound "epic." But reverb takes up space. It is a dense cloud of frequencies. If you have a long reverb tail on a snare drum, but you also have a busy hi-hat pattern playing over it, the reverb gets masked. It just becomes mud.

To make a reverb sound huge, you need to give it silence to expand into. This is why "Stop-Start" arrangements work so well. In my track The Anchor and the Kite, there are moments where the entire band stops dead on beat 4. In that sudden silence, you hear the reverb tail of the snare drum ringing out into the void.

That moment of "decay" gives the listener a sense of the room. It tells them: "We are in a massive, lonely space." If the music had kept playing, that spatial cue would be lost.

The Psychology of the Drop

In EDM (Electronic Dance Music), the "Drop" is the moment of maximum energy. But in Electronic Soul, I often use what I call the "Anti-Drop."

The listener is conditioned to expect the chorus to get louder and busier. I like to subvert that expectation. Sometimes, when the chorus hits, I take elements away. I might drop the drums out entirely and just leave a wall of sub-bass and vocals.

This creates a physical sensation of "freefall." It’s that feeling in your stomach when an elevator drops suddenly. It forces the listener to re-calibrate. Instead of dancing, they are floating. It shifts the focus from the body (dancing) to the head (thinking).

Case Study: "Blue Light Silhouette"

Let's look at my track Blue Light Silhouette. The main hook of the song isn't a melody; it's a rest.

The synth riff plays a pattern: Da-da-da... [Silence] ... Da-da-da.

That silence is rhythmic. It is a "ghost note." If I had filled that gap with an echo or a drum fill, the riff would lose its bounce. The groove exists because of the empty space, not despite it. It is the audio equivalent of a funk drummer playing "in the pocket." It’s about where you don't hit the drum.

Conclusion: Confidence in Minimalism

Ultimately, filling a track with noise is a symptom of insecurity. We layer sounds because we are afraid the song isn't interesting enough on its own. We are afraid of boring the listener.

Embracing silence requires confidence. It requires you to say: "This melody is strong enough to stand naked. This lyric is heavy enough to hold your attention without a drum beat."

So, the next time you are working on a track and it feels like "something is missing," don't add another track. Try muting one. Carve out a hole in the frequency spectrum. Let the reverb tail die out. Give the listener a moment of silence to process what they just heard.

In a world that never stops screaming, silence is the loudest sound you can make.

Dec 06, 2025 Performance Psychology

The Spotlight Paradox: Overcoming Stage Fright in an Electronic World

The studio is a womb. It is safe, dark, and private. If I sing a flat note, nobody hears it. I just press "Command+Z" (Undo), and the mistake vanishes from history. I can spend three hours perfecting a single snare hit until it sounds exactly like it does in my head.

The stage is the opposite. It is bright, loud, and public. There is no "Undo" button. If I mess up, it happens in real-time, in front of strangers who are judging me. For an introverted electronic producer like me, the transition from the bedroom studio to the spotlight feels less like a career step and more like a violation of nature.

This is the story of my battle with stage fright, and the psychological tools I use to turn panic into performance.

The "Button Pusher" Syndrome

Electronic musicians suffer from a specific type of Imposter Syndrome that rock bands don't understand. If a guitarist breaks a string, everyone sees it. It is a heroic struggle. But if a laptop crashes, the music just stops. You are just a guy standing there looking at a screen.

There is a fear that the audience thinks you are doing nothing. "Is he actually playing that? Or is he just checking his email?" This anxiety used to paralyze me. I felt like I had to over-perform—twisting knobs frantically just to prove I was working.

I realized that this fear was actually blocking my connection with the crowd. I was so busy trying to look busy that I forgot to look at the people. The solution wasn't to do more; it was to do less, but with intention. Now, when I trigger a scene in Ableton, I do it deliberately. I let the crowd see the cause and effect. I realized they aren't there to audit my technical skills; they are there to feel the music.

The Biology of Panic

Stage fright isn't "all in your head." It is a biological survival mechanism. It is the "Fight or Flight" response. Your brain sees a room full of staring eyes and interprets it as a threat, like a pack of wolves. It dumps adrenaline into your bloodstream.

Your hands shake (making it hard to play keys). Your mouth goes dry (making it hard to sing). Your digestion stops (giving you that "butterflies" feeling). This is your body trying to save your life.

The breakthrough for me came when I stopped trying to fight the adrenaline. You cannot calm down by willing yourself to be calm. Instead, I learned to "reframe" the feeling. In psychology, anxiety and excitement are almost the exact same biological state (high heart rate, high energy). The only difference is the story you tell yourself.

Now, when my hands start shaking backstage, I don't say, "I am terrified." I say, "My body is giving me extra energy for the show." I use that shake to add vibrato to my voice. I use that energy to jump around. I surf the wave instead of drowning in it.

The Mask vs. The Man

David Bowie had Ziggy Stardust. Beyoncé had Sasha Fierce. Almost every great performer creates a "persona" to protect themselves. It isn't fake; it is a shield.

When I step on stage as Tos Connect, I am not the same guy who worries about paying bills or feels awkward at parties. I step into a version of myself that is confident and mysterious. I wear specific clothes that I only wear on stage. It is a uniform. Putting on that jacket is a ritual that signals to my brain: "It is time to work."

This separation helps with the fear of judgment. If the crowd doesn't like the show, they aren't rejecting me as a human being. They are just not vibing with the "Tos Connect" performance. It creates a healthy emotional distance that keeps me sane.

The Ritual of the First 10 Seconds

The scariest part of any show is the silence before the first note. The anticipation is suffocating. To combat this, I have engineered the first 10 seconds of my set to be automatic.

I don't start with complex improvisation. I start with a pre-programmed intro sequence—a swelling drone that builds tension. This buys me time. I can walk to the mic, take a deep breath, check my levels, and get grounded while the sound fills the room.

By the time I have to actually play a note or sing a word, I am already inside the sound world. I have established the atmosphere. I invite the audience into my house, rather than feeling like I am intruding on theirs.

The Value of Vulnerability

During a show in 2024, my MIDI controller completely died in the middle of The Lie Was Worse. The music cut out. The lights went dark. It was my worst nightmare coming true.

Five years ago, I would have panicked and walked off stage. But this time, I just grabbed the microphone. I laughed. I told the crowd, "Well, that's what happens when you trust robots too much." I sat at the piano and played the song acoustic while my tech rebooted the system.

And you know what? It was the best moment of the night. The crowd cheered louder for that acoustic version than for the lasers and the bass drops. They saw the human behind the machine. They saw me fail, and they saw me keep going.

Conclusion: Embrace the Shake

If you are an aspiring artist terrified of performing, know this: The fear never fully goes away. Even the biggest stars in the world get nervous. And honestly, you shouldn't want it to go away.

The fear means you care. It means the stakes are high. It means you are about to do something brave. A performance without nerves is usually boring. It feels rehearsed and flat. The "shake" is where the magic lives.

So, let your hands tremble. Let your heart race. Step into the light and let it burn. It’s better to be scared and loud than safe and silent.

Dec 05, 2025 Lifestyle & Career

The Double Life: How to Balance a 9-to-5 Job with a Music Career

There is a pervasive myth in the music industry that goes something like this: "If you were a real artist, you would quit your job, sleep on a couch, and dedicate 100% of your time to music. Anything less is a lack of commitment."

This narrative is not only toxic; it is financially ruinous. It ignores the reality of the modern economy, especially for independent artists building a career from the ground up. The truth is, most of your favorite "overnight success" stories had day jobs for years before you ever heard their names.

For the entire existence of Tos Connect, I have lived a double life. By day, I work a normal job. By night, I enter the studio to build the universe of Electronic Soul. This isn't something I hide; it is something I embrace. Here is why keeping your day job might actually be the best thing for your music, and how to manage the burnout that comes with burning the candle at both ends.

The "Angel Investor" Mindset

The moment I stopped resenting my day job was the moment my music started getting better. I used to look at my 9-to-5 as a prison that was keeping me from my "real" life. Every hour at the office felt like an hour stolen from the studio.

Then, I shifted my perspective. I stopped viewing my employer as a jailer and started viewing them as my Angel Investor. My job isn't a trap; it is the funding source for my startup. My salary pays for my studio monitors. It pays for my Spotify distribution fees. It pays for the targeted ads I run on Instagram.

Because I have a steady income, I don't have to be desperate. I don't have to take bad gigs just to pay rent. I don't have to write cheesy pop jingles to make a quick buck. My "investor" (my job) covers my living expenses so that Tos Connect can remain pure art. This financial freedom buys me creative freedom. If the music makes $0 this month, I still eat. That lack of pressure allows me to take risks that a "full-time" starving artist couldn't afford to take.

Managing the "Switching Cost"

The hardest part of the double life isn't the lack of time; it is the lack of energy. It is the "Switching Cost."

After eight hours of spreadsheets, meetings, and office politics, your brain is fried. Transitioning from "Corporate Mode" to "Creative Mode" is difficult. You sit down at the DAW, but your mind is still replaying an argument you had with a coworker. You stare at the screen, uninspired.

To combat this, I developed a "Transition Ritual." I never go straight from work to music. I need a buffer. Usually, this involves a change of environment and a sensory reset. I might take a 20-minute motorbike ride through Phnom Penh (the chaos of the traffic clears my head). I might take a cold shower. I might meditate for ten minutes.

This ritual signals to my brain: "The workday is over. The artist shift has begun." It creates a boundary. Without that boundary, the stress of the day bleeds into the music, and the music becomes just another chore on the to-do list.

The 5 AM Club vs. The Night Shift

There are two types of artists with day jobs: The Early Risers and the Night Owls. I have tried both.

The "5 AM Club" suggests waking up two hours before work to produce. The logic is sound: you give your best creative energy to yourself first, before the job drains you. I tried this for a month. While I was productive, the music sounded... efficient. It lacked soul. Electronic Soul is nocturnal music. It needs the darkness.

I accepted that I am a Night Shift producer. I do my best work between 10 PM and 2 AM. Yes, this means I am often tired the next day. Yes, it means I drink too much coffee. But there is a specific magic that happens when the rest of the city is asleep. The notifications stop. The emails stop. The world gets quiet, and the "Blue Light Silhouette" feeling takes over.

You have to find the chronotype that fits your biology. Don't force yourself to be a morning person if your muse wakes up at midnight.

Consistency Beats Intensity

When you have limited time, you feel pressure to make every session a marathon. You think, "I only have Saturday free, so I must work for 12 hours straight."

This is a recipe for burnout. I have learned that frequency is more important than duration. I would rather work for 45 minutes every night than 8 hours once a week. Touching the music every day keeps the subconscious connection alive. It keeps the "muscle memory" of the track fresh in your mind.

I use a technique called "The Non-Zero Day." The goal is simply to do something greater than zero. Maybe I just tweak a hi-hat pattern. Maybe I just write two lines of lyrics. Maybe I just organize my sample library. As long as I moved the needle forward by 1%, it was a successful day.

Using Frustration as Fuel

Ironically, some of my best songs were born out of frustration with my day job. The song No Hard Work is literally about the exhaustion of the grind. The Lie Was Worse channels the anger of professional betrayal.

If you are frustrated with your 9-to-5, don't suppress that feeling. Use it. Put that aggression into the bassline. Put that longing for escape into the reverb. Your life experience—even the boring, frustrating parts—is the raw material for your art.

The friction between the life you live and the life you dream of is where the spark comes from. If I were sitting on a beach all day with no responsibilities, I probably wouldn't write intense, moody electronic music. I write this music because I need an escape.

Conclusion: The Exit Strategy

Will I keep the day job forever? Maybe not. The goal is eventually for Tos Connect to become self-sustaining. But I am not in a rush to quit.

I will quit when the math makes sense. I will quit when the "side hustle" income eclipses the "main hustle" income. Until then, I wear the double life like a badge of honor. I am not a "wannabe" musician. I am a professional who is funding his own dream.

If you are reading this from your office desk, sneaking a look at your phone while your boss isn't looking: Keep going. You aren't falling behind. You are just building your foundation. The night shift is waiting for you.

Dec 04, 2025 Workflow Strategy

Escaping the Loop: How to Cure "Demoitis" and Actually Finish Songs

Every music producer has a secret folder on their hard drive. It is a digital graveyard. Inside, you will find hundreds, maybe thousands, of project files with names like "Cool_Synth_Idea_V3" or "Sad_Piano_Beat_FINAL_REAL_2."

These are the ghosts of songs that never happened. They are 4-bar loops that sounded amazing for 20 minutes, but never grew up to become full tracks. In the industry, we call this sickness "Demoitis." It is the paralysis of perfectionism.

For the first two years of Tos Connect, I suffered from this chronic condition. I had 500 demos and zero releases. I realized that if I wanted to be an artist, and not just a "hobbyist loop-maker," I had to change my psychology. Here is the specific workflow I developed to stop hoarding ideas and start releasing records.

The Psychology of "The Loop"

Why is it so easy to make a loop and so hard to make a song? Because a loop is safe. A 4-bar loop exists in a state of infinite potential. In your head, you imagine it could turn into the greatest song ever written. But the moment you start arranging it—adding a B-section, writing a bridge—you have to make concrete decisions. And every decision narrows that infinite potential.

We stop working because we are afraid of ruining the "vibe" of the loop. We are afraid that the finished reality won't match the fantasy in our heads. To finish music, you have to accept a brutal truth: A finished song that is 80% perfect is infinitely better than a perfect loop that nobody hears.

Technique 1: Subtractive Arrangement

Most producers try to arrange "Left to Right." They write an Intro, then a Verse, then a Chorus. This is like trying to build a bridge while you are walking on it. You run out of materials halfway across.

I switched to "Subtractive Arrangement." I treat the timeline like a block of marble. I take my main 8-bar loop (where every instrument is playing at once) and I paste it across the entire 3-minute timeline. Now, the song is "full" from start to finish.

Then, I start carving. I delete the drums from the Intro. I delete the lead synth from the Verse. I mute the bass in the Bridge. Instead of asking "What should I add next?", I ask "What can I remove?" This sounds simple, but it cured my writer's block overnight. It is easier to destroy than to create.
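If it helps to see the two steps concretely, here is a minimal sketch of subtractive arrangement in Python. The section and track names are invented for illustration, not taken from any real project file:

```python
# Sketch of "subtractive arrangement": start with every instrument
# playing in every section, then carve by removing. Section and track
# names are hypothetical stand-ins for a real arrangement.
sections = ["intro", "verse", "chorus", "bridge", "outro"]
tracks = ["drums", "bass", "pads", "lead", "vocal"]

# Step 1: paste the full loop across the whole timeline.
arrangement = {s: set(tracks) for s in sections}

# Step 2: carve. Ask "what can I remove?", not "what should I add?"
arrangement["intro"].discard("drums")
arrangement["verse"].discard("lead")
arrangement["bridge"].discard("bass")

for section in sections:
    print(section, sorted(arrangement[section]))
```

The point of the sketch is the direction of the workflow: the song starts "full" everywhere, and every edit is a deletion.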

Technique 2: Commit to Audio (Burn the Bridges)

Modern DAWs (like Ableton or Logic) give us too many options. We keep MIDI tracks active so we can "change the synth patch later." We keep EQ plugins open so we can "tweak the snare later."

This endless tweakability is a trap. It keeps you in "Sound Design Mode" when you should be in "Songwriting Mode."

My rule for 2025 is: Commit to Audio. Once I have a cool synth sound, I "freeze and flatten" the track. I turn it into a WAV file. Now, I can't change the notes. I can't change the filter cutoff. I am stuck with it.

This sounds terrifying, but it is liberating. When you can't go back and fix the sound, you are forced to move forward and write the rest of the song. It forces you to work with what you have, just like the Beatles or Pink Floyd did with tape.

Technique 3: The "20-Minute" Challenge

I realized that I was spending 4 hours on a kick drum and 10 minutes on the melody. That is backwards. The listener doesn't hum the kick drum; they hum the melody.

To fix this, I gamified my studio time. I set a timer for 20 minutes. The goal: Lay out the entire structure of the song. Intro, Verse, Chorus, Verse, Chorus, Outro.

It doesn't have to sound good. It just has to be done. By forcing speed, I shut down my internal critic. I don't have time to second-guess if the transition works; I just put it there. Usually, the decisions I make in a panic are more instinctual and emotional than the decisions I make when I overthink.

The "Reference Track" Reality Check

Often, we get stuck because we lose perspective. After listening to your own snare drum for 6 hours, you lose the ability to judge if it is loud or quiet.

I always drag a professional track (a "Reference") into my project file. If I am working on a moody electronic track, I might import a song by James Blake or Bonobo. Every 15 minutes, I mute my mix and listen to theirs.

It is a humbling reality check. It instantly tells me: "Your bass is too muddy," or "Your vocals are too dry." It stops me from wandering down a rabbit hole of bad mixing decisions. It provides a "North Star" to navigate by.

The Final 10%: Knowing When to Quit

There is an old saying, often attributed to Leonardo da Vinci: "Art is never finished, only abandoned."

There is no message that pops up on your screen saying "Song Complete!" You have to decide. I use the "Law of Diminishing Returns." In the beginning, one hour of work improves the song by 50%. By the end, one hour of work improves the song by 0.5%.

When I find myself turning a volume fader up by 1dB, then down by 1dB, then back up... I know I am done. I am no longer improving the art; I am just soothing my anxiety. That is the moment I export the file, upload it to DistroKid, and never touch it again.

Conclusion: Quantity Leads to Quality

There is a parable about a pottery class. Half the class was graded on the quantity of pots they made (50 lbs of pots = A). The other half was graded on the quality of a single pot.

The result? The "Quantity" group made the best pots. Why? Because they spent all day experimenting, failing, and learning from mistakes. The "Quality" group spent all day theorizing about the perfect pot and produced a mediocre lump of clay.

If you want to make great music, stop trying to make a Masterpiece. Just make a lot of music. Finish the track. Release it. Hate it later. Start the next one. That is the only path forward.

Dec 03, 2025 Branding & Art

Seeing the Sound: Why Visual Identity is Just as Important as Audio Quality

We like to think that music is an auditory medium. As musicians, we spend thousands of hours training our ears, tweaking frequencies, and obsessing over the perfect reverb tail. We convince ourselves that if the music is good enough, nothing else matters.

But the reality of the digital age is that music is consumed with the eyes first. Before a listener on Spotify presses "Play" on No Hard Work, they see the artwork. Before they hear my voice on TikTok, they see my silhouette. If the visual doesn't intrigue them, the audio never gets a chance.

For Tos Connect, the visual identity—the stark black, the electric gold, the futuristic minimalism—wasn't an afterthought. It was developed alongside the music. Here is why building a cohesive visual world is the difference between being a "beatmaker" and being an "artist."

The Concept of "Synesthesia"

There is a neurological condition called Synesthesia where the senses blend together. Some people literally "see" colors when they hear notes. While I don't have this condition medically, I believe every producer needs to develop it creatively.

When I am writing a song, I am constantly asking: "What color is this chord?"

For me, Electronic Soul is defined by two colors: Black and Gold.

Black represents the electronic elements. It is the void, the screen turned off, the isolation of the digital world. It is the bass frequencies that rattle your chest in a dark room.

Gold represents the soul. It is the filament in a lightbulb. It is the warmth of the human voice cutting through the cold darkness. It is valuable, conductive, and electric.

This color palette isn't just a design choice; it is a sonic map. When you visit this website, or look at my album covers, you intuitively know what the music is going to sound like before you hear it. You know it won't be a bubblegum pop song (Pink) or a country song (Brown). You know it will be dark, sleek, and electric. That is the power of branding.

The Album Cover as a Portal

In the vinyl era, album art was 12 inches wide. You could get lost in it. Today, on a smartphone notification screen, album art is often smaller than a postage stamp. This creates a design challenge: How do you communicate complex emotion in 50 pixels?

My strategy is Iconography. Instead of complex, busy scenes, I focus on singular, strong objects. An anchor. A kite. A silhouette. These images read clearly even at tiny sizes.

For the single The Anchor and the Kite, I could have used a literal photo of a boat and the sky. Instead, I used an abstract, high-contrast graphic. It forces the viewer to ask a question. "What is that shape?" That micro-second of curiosity is usually enough to stop the scroll and earn a click.

Consistency Builds Trust

If you look at the greatest electronic artists—Daft Punk, Justice, The Weeknd—they never break character. Their visual world is hermetically sealed. You never see The Weeknd posting a low-quality photo of his lunch. Every image reinforces the narrative.

For independent artists, this discipline is hard. We feel pressure to be "authentic" on social media, which often means being messy. But I believe there is a difference between "personal" and "private."

I share personal stories (like in this blog), but I present them in a curated, visual package. My Instagram feed, my YouTube thumbnails, and this website all use the same fonts, the same colors, and the same mood. This consistency builds trust with the audience. It tells them: "I take this seriously. This isn't a hobby. This is a world I have built for you."

DIY Design in 2025

Just like you don't need a million-dollar studio to make music, you don't need a design agency to build a brand. We live in the golden age of creative tools.

I do almost all my own design work. I use tools like Canva for quick layouts and Blender for 3D rendering. Blender is free, open-source software that allows you to create photorealistic 3D worlds. Many of the abstract shapes in my artwork are actually 3D models of sound waves that I sculpted digitally.

Learning basic design principles—hierarchy, negative space, typography—has made me a better producer. Arranging a song and arranging a poster are surprisingly similar. You are looking for balance. You are trying to guide the audience's attention to the most important element. In a song, it's the vocal. In a poster, it's the title.

The "Merch" Mindset

Finally, visual identity is the key to sustainability. In Article 9, I mentioned that streaming pays very little. Merchandise is the lifeblood of indie artists.

But nobody wants to buy a T-shirt with a bad logo on it. Fans buy merch because they want to wear the aesthetic. They want to signal to the world that they belong to a specific tribe. By investing time in making my "Visuals" cool, I am essentially doing R&D for future merchandise.

If the album art is beautiful enough to hang on a wall as a poster, people will buy the poster. If the logo is sleek enough to look like a streetwear brand, people will wear the hoodie. You aren't just selling music; you are selling an identity.

Conclusion: The Complete Package

We are long past the days where you could just "let the music speak for itself." The music is just the script. The visuals are the set design, the lighting, and the costumes.

If you are a musician reading this, take a look at your last three releases. Do they look like they came from the same person? Do they tell a story? If not, stop worrying about your snare drum sound for a moment and open a design program.

Give your music a face. Give your sound a color. When the eyes and the ears agree, the heart follows.

Dec 02, 2025 Mixing & Engineering

The Dark Art of Mixing: Turning Binary Code into Emotion

There is a misconception that a song is "written" when the chords are played and the lyrics are sung. But in the world of Electronic Soul, the writing process doesn't end at the recording stage. It continues deep into the mixing stage.

Mixing is often viewed as a purely technical task—balancing volume levels, panning instruments left and right, and making sure the bass doesn't blow out the speakers. While that is true, I view mixing as "Emotional Architecture."

A good mix can make a sad song sound devastating. A bad mix can make an exciting song sound boring. For Tos Connect, the mix is where the "Soul" is actually injected into the "Electronics." Here is my philosophy on mixing, and the specific techniques I use to make cold, digital files feel like living, breathing entities.

The Problem with Digital Perfection

When you record into a modern DAW (Digital Audio Workstation) like Ableton Live, the audio is pristine. It is mathematically perfect. There is virtually no noise floor. The frequency response is flat from 20Hz to 20kHz.

This sounds like a good thing, but "pristine" often translates to "sterile." Our ears are evolved to hear imperfections. In the real world, air absorbs high frequencies. Walls reflect sound unevenly. When we hear a sound that is too clean, our brain registers it as fake. It creates a subconscious distance between the listener and the music.

My goal in every mix is to dirty up the signal. I want to ruin the perfection. I want to introduce the chaos of the physical world back into the digital realm.

The Secret Weapon: Saturation

If compression is the heartbeat of a mix, Saturation is the body heat. Saturation is essentially "controlled distortion." It happens when you push an analog circuit (like a tape machine or a tube amp) slightly too hard.

In the digital world, I use saturation plugins on almost every single channel. I don't use them to make things sound "distorted" like a rock guitar. I use them to add "harmonics."

For example, a pure sine wave sub-bass is very hard to hear on laptop speakers or phones because those small speakers can't reproduce low frequencies. By applying saturation to the bass, I generate upper harmonics (copies of the note at higher frequencies). Suddenly, the bass "pops" out of the speaker. It becomes audible on an iPhone without losing its weight in a club system. Saturation is the glue that makes digital synths sound thick, warm, and expensive.
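You can verify this effect numerically. The sketch below uses `tanh` waveshaping as a stand-in for a saturation plugin (an assumption on my part; it is one common soft-clipping curve, not necessarily what any particular plugin uses) and checks that a pure 50 Hz sine gains upper harmonics:

```python
import numpy as np

# Saturate a pure 50 Hz sine (a sub-bass) with tanh waveshaping
# and confirm that upper harmonics appear in the spectrum.
sr = 44100
t = np.arange(sr) / sr                  # exactly one second of audio
sub = np.sin(2 * np.pi * 50 * t)        # pure sine: fundamental only

drive = 4.0
saturated = np.tanh(drive * sub)        # "controlled distortion"

spectrum = np.abs(np.fft.rfft(saturated))
# With a 1-second window, FFT bin k corresponds to k Hz.
# tanh is an odd function, so it generates odd harmonics: 150 Hz, 250 Hz...
fundamental = spectrum[50]
third_harmonic = spectrum[150]
print(third_harmonic / fundamental)     # clearly nonzero after saturation
```

Those new harmonics at 150 Hz and above are what small phone speakers can actually reproduce, which is why the bass suddenly "pops" out of them.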

Compression as "Groove," Not Just Volume

New producers often use compressors just to tame loud peaks. But in Electronic Soul, compression is a rhythmic instrument. It changes the groove of the song.

I rely heavily on Sidechain Compression. This is the technique where the volume of the bass (or synths) ducks down every time the kick drum hits. In EDM, this is used aggressively to make people dance. I use it more subtly to create a "breathing" effect.

In The Anchor and the Kite, the entire synth pad layer is sidechained to a "ghost kick" (a kick drum that is muted). This causes the pads to swell and recede in a rhythmic pulse. It creates a feeling of ocean waves pushing and pulling. Even if the drums aren't playing, you can feel the rhythm inside the sustained notes. This keeps the listener's head nodding even in the ambient sections.
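A minimal sketch of that "breathing" gain envelope, assuming a 120 BPM tempo, a 60% duck depth, and an ~80 ms release (all invented numbers for illustration, not the settings from the actual track):

```python
import numpy as np

# Sidechain "ducking" driven by a muted ghost kick: the pad's gain
# dips on every beat and recovers before the next, creating the
# push-and-pull pulse even when no drums are audible.
sr = 44100
beat = int(sr * 60 / 120)               # samples per beat at 120 BPM

# Gain envelope for one beat: instant dip, smooth exponential recovery.
n = np.arange(beat)
duck = 1.0 - 0.6 * np.exp(-n / (0.08 * sr))   # duck 60%, ~80 ms release

envelope = np.tile(duck, 4)             # four beats of pumping
pad = np.random.randn(4 * beat) * 0.1   # noise as a stand-in for a pad
pumped = pad * envelope                 # the pad now "breathes" in time
```

In a DAW you would get this envelope from a sidechain compressor keyed to the ghost kick; the math above is just the shape that compressor ends up drawing.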

EQ: The Art of Carving Space

Imagine a mix is a small suitcase, and you are trying to pack clothes for a month. You can't just throw everything in; you have to fold, compress, and organize. That is what EQ (Equalization) does.

The biggest mistake I see in amateur mixes is "Frequency Masking." This is when two instruments are fighting for the same space. Usually, it's the Kick Drum and the Bass Guitar fighting for the low frequencies (60Hz - 100Hz).

My rule is: Only one captain per frequency range. If the Kick is the captain of 60Hz, I will use an EQ to cut 60Hz out of the Bass. If the Vocal is the captain of 3kHz (presence), I will cut 3kHz out of the guitars and synths.

This is called "Subtractive EQ." I rarely boost frequencies. I almost always cut. By carving out a hole for each element, you create clarity. You don't need to turn the vocals up to hear them; you just need to turn the other instruments down at the specific frequency where the vocals live.
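As a rough illustration of carving one frequency out of one instrument, here is a cut centered at 60 Hz built with SciPy. I am using a narrow notch filter purely because it is the simplest cut to construct in code; a real mixing EQ move would typically be a wider, gentler peaking cut:

```python
import numpy as np
from scipy import signal

# "One captain per frequency range": the kick owns 60 Hz,
# so carve 60 Hz out of the bass.
sr = 44100
b, a = signal.iirnotch(w0=60, Q=2, fs=sr)   # cut centered at 60 Hz

# Inspect the filter's response: deep cut at 60 Hz, untouched at 600 Hz.
freqs, h = signal.freqz(b, a, worN=[60, 600], fs=sr)
print(np.abs(h))   # near 0 at 60 Hz, near 1 at 600 Hz

# To apply it to the actual bass audio:
# bass_carved = signal.lfilter(b, a, bass_audio)
```

The response check is the important habit: a subtractive move should only touch the contested range and leave everything else alone.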

Automation: The Human Touch

If you set your volume faders at the beginning of the song and never touch them again, your mix will sound dead. A real performance is dynamic. A drummer hits the snare harder in the chorus. A singer whispers in the verse.

Since I often program my drums, I have to fake this dynamism using Automation. I draw invisible lines in my DAW that tell the computer to move the volume faders up and down throughout the song.

I automate everything. I might turn the hi-hats up by 1dB in the chorus to add energy. I might pan a synthesizer slowly from left to right during the bridge to create a sense of disorientation. I might automate the "Dry/Wet" knob on a reverb plugin to make a vocal word echo out into the distance.

This is painstaking work. It takes hours. But it is the difference between a "loop" and a "song." Automation tells the listener a story. It guides their ear to exactly what I want them to focus on at any given second.
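Under the hood, drawn automation is just breakpoints interpolated across the timeline and multiplied into the audio. A minimal sketch, with invented breakpoint times and gains (an 8-second clip where the "chorus" sits at 2-6 seconds):

```python
import numpy as np

# Volume automation as breakpoints: (time in seconds, gain multiplier),
# interpolated sample-by-sample and applied to the audio.
sr = 44100
duration = 8.0
t = np.arange(int(sr * duration)) / sr

# "Draw the line": full level in the chorus (2-6 s), dipped elsewhere.
points_t = [0.0, 2.0, 6.0, 8.0]
points_gain = [0.7, 1.0, 1.0, 0.5]
envelope = np.interp(t, points_t, points_gain)

hihats = np.random.randn(len(t)) * 0.05   # noise stand-in for a hi-hat bus
automated = hihats * envelope             # the fader now "moves" over time
```

Every automation lane in a DAW, whether it controls volume, pan, or a reverb's dry/wet knob, reduces to this same breakpoint-and-interpolate idea.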

The "Car Test" Reality Check

You can have $10,000 studio monitors, but the most important listening environment in the world is a 2010 Toyota Camry. Why? Because that is where people actually listen to music.

Cars are terrible acoustic environments. They are made of glass and metal. The engine hums at 80Hz. The speakers are often down by your ankles. If your mix sounds good in a car, it is bulletproof.

Before I release any track, I burn it to a CD (or upload it to a private link) and go for a drive through Phnom Penh. I listen for specific things: Can I hear the lyrics over the traffic noise? Does the bass rattle the door panels too much? Does the snare drum hurt my ears at high volume?

The car never lies. If the mix falls apart on the highway, I go back to the studio and fix it. Mixing isn't done until the Toyota approves.

Conclusion: Mixing is the Final Performance

Ultimately, mixing is the final stage of performance. As the mixer, I am "performing" the faders. I am deciding how the audience experiences the emotion of the song.

It is a dark art. It requires patience, technical knowledge, and a lot of coffee. But when you get it right—when the bass hits your chest, the vocals touch your heart, and the drums make your neck snap—there is no better feeling in the world. It is the moment the code disappears, and only the feeling remains.

Nov 29, 2025 Mastering

The Final Polish: Demystifying Mastering for the Bedroom Producer

If Mixing is "Emotional Architecture" (as we discussed last week), then Mastering is the varnish, the frame, and the lighting. It is the final step in the audio production process, and for most independent artists, it is also the most confusing.

There is a mythology around mastering. We imagine a wizard in a soundproof tower with million-dollar speakers, turning a mediocre song into a Grammy-winning hit with the turn of a single magical knob. Because of this myth, many bedroom producers are terrified to touch their master bus. They slap a limiter on it and hope for the best, or they pay a stranger $100 to do something they don't understand.

For Tos Connect, mastering isn't magic. It is quality control. It is the bridge between the studio and the listener. Here is a transparent look at how I master my own tracks for Spotify, and why you shouldn't be afraid of the "M-word."

What is Mastering, Actually?

Strip away the mystique, and mastering comes down to three goals:

  1. Consistency: Making sure your song sounds like it belongs on a playlist next to The Weeknd or Daft Punk. It shouldn't be drastically quieter or drastically brighter.
  2. Translation: Ensuring the song sounds good on everything—from a $50,000 club system to a $5 pair of earbuds.
  3. Safety: Ensuring there are no technical errors (like digital clipping or distortion) that will cause streaming services to reject the file.

If your mix is good, mastering should be subtle. If you find yourself needing to boost the bass by 10dB during mastering, you don't need a mastering engineer; you need to go back and fix the mix.

The Loudness Wars are Over (Sort Of)

For two decades, the music industry fought the "Loudness War." Everyone wanted their CD to be louder than the competition, so they crushed the audio with heavy limiting. The result was music that was loud, but lifeless. It had no dynamic range (the difference between the quiet and loud parts).

In the streaming era, the war has changed. Spotify and Apple Music use "Loudness Normalization." If you upload a track that is insanely loud, they will just turn it down to match everything else. If you upload a quiet track, they will turn it up.

This is liberating. It means I don't have to crush the life out of The Lie Was Worse just to compete. I master my tracks to around -9 to -10 LUFS (Loudness Units Full Scale). This is the sweet spot for Electronic Soul. It is loud enough to have impact in a club, but dynamic enough that the drums still punch.
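The normalization math itself is almost trivially simple. A sketch, assuming a -14 LUFS playback target (the figure commonly cited as Spotify's default; the exact target varies by platform and listener settings):

```python
# Streaming loudness normalization as a simple gain offset:
# the platform measures the track's integrated loudness and applies
# whatever gain lands it on the target.
def normalization_gain_db(track_lufs, target_lufs=-14.0):
    """Gain in dB a platform would apply to hit the playback target."""
    return target_lufs - track_lufs

print(normalization_gain_db(-9.5))   # a loud master gets turned DOWN
print(normalization_gain_db(-20.0))  # a quiet master gets turned UP
```

This is exactly why crushing a master buys you nothing on streaming: any loudness beyond the target is simply subtracted back out, and all you keep is the squashed dynamics.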

The Mastering Chain: Less is More

My mastering chain is surprisingly simple. I believe that every plugin you add to the master bus degrades the signal slightly, so I only use what is absolutely necessary.

1. Subtractive EQ: I use a linear-phase EQ to cut out any rumble below 30Hz. These are ultra-low frequencies that you can't hear, but they eat up headroom (the space between your loudest peak and 0dBFS). Removing them allows the limiter to work more efficiently.

2. Glue Compression: I use a bus compressor with a very slow attack and a fast release, doing only 1-2dB of gain reduction. This "glues" the mix together. It makes the drums and the synths feel like they are playing in the same room.

3. Saturation (Again): Just like in mixing, a tiny bit of tape saturation on the master adds "perceived loudness." It thickens the sound without raising the peak volume.

4. Limiting: This is the final safety net. The limiter catches the loudest peaks and prevents them from going over 0dBFS (digital clipping). I usually use two limiters in a row: one to catch the fast transients (drums) and one to just boost the overall volume. Doing it in stages sounds more transparent than making one plugin do all the heavy lifting.
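To make the four stages concrete, here is a toy version of the whole chain in Python. Everything in it is illustrative: the filter is minimum-phase rather than the linear-phase EQ described above, and the threshold, ratio, drive, and ceiling values are hypothetical defaults, not the settings I actually use.

```python
import math

def highpass(samples, cutoff_hz=30.0, sr=44100):
    """1. Subtractive EQ: one-pole high-pass that removes sub-30Hz rumble."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    alpha = rc / (rc + 1.0 / sr)
    out, px, py = [], 0.0, 0.0
    for x in samples:
        py = alpha * (py + x - px)  # standard RC high-pass recurrence
        px = x
        out.append(py)
    return out

def glue_compress(samples, sr=44100, threshold_db=-18.0, ratio=2.0,
                  attack_ms=30.0, release_ms=100.0):
    """2. Glue compression: slow attack, fast release, gentle 2:1 ratio."""
    atk = math.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = math.exp(-1.0 / (sr * release_ms / 1000.0))
    env_db, out = -120.0, []
    for x in samples:
        level_db = 20 * math.log10(max(abs(x), 1e-9))
        coeff = atk if level_db > env_db else rel
        env_db = coeff * env_db + (1 - coeff) * level_db  # smoothed level
        over_db = max(env_db - threshold_db, 0.0)
        out.append(x * 10 ** (-over_db * (1 - 1 / ratio) / 20))
    return out

def saturate(samples, drive=2.0):
    """3. Saturation: tanh waveshaping lifts quiet material while the
    ceiling stays put (perceived loudness up, peak level unchanged)."""
    return [math.tanh(drive * x) / math.tanh(drive) for x in samples]

def peak_limit(samples, ceiling=0.98, sr=44100, release_ms=50.0):
    """4. Limiting: gain drops instantly on an overshoot, then recovers
    smoothly. (No lookahead; real limiters delay the signal slightly
    so they can react in advance.)"""
    rel = math.exp(-1.0 / (sr * release_ms / 1000.0))
    gain, out = 1.0, []
    for x in samples:
        needed = ceiling / abs(x) if abs(x) > ceiling else 1.0
        gain = needed if needed < gain else rel * gain + (1 - rel) * needed
        out.append(x * gain)
    return out

def master(samples):
    """Run the chain in stages, each doing only a little work."""
    return peak_limit(saturate(glue_compress(highpass(samples))))
```

The point of `master()` is the staging: every processor does a small amount of work, which is exactly why two gentle limiters in series sound more transparent than one limiter doing all the heavy lifting.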

The Power of "Reference Tracks"

You cannot master in a vacuum. Your ears adapt to whatever you are hearing. If you listen to a dull mix for an hour, your brain will trick you into thinking it sounds bright.

I always load a reference track into my mastering session. If I am mastering a dark, moody track, I might reference Limit to Your Love by James Blake. I A/B between my track and his constantly.

I ask specific questions: "Is my sub-bass as tight as his?" "Are my vocals sitting as forward as his?" This keeps me honest. It prevents me from releasing a track that sounds amateurish simply because my ears were tired.

The "Next Morning" Rule

I have a strict rule: I never master a song on the same day I mix it.

When you finish mixing, you have "mixer's ear." You have lost objectivity. You are too close to the details. If you try to master immediately, you will make bad decisions. You will boost the treble because your ears are fatigued.

I export the mix, sleep on it, and open the mastering session the next morning with fresh ears and a fresh cup of coffee. The difference is instant. I often hear problems in 10 seconds that I missed after 10 hours of mixing.

AI Mastering vs. Human Ears

Services like LANDR or eMastered offer "AI Mastering": you upload a file, an algorithm analyzes it, and it spits out a master in seconds.

Are they good? They are... okay. They are technically correct. But they lack context. An AI doesn't know that the drop in No Hard Work is supposed to feel "lazy." It might try to brighten it up and tighten the dynamics, ruining the emotional intent.

I believe every artist should learn to master their own music, at least to a basic level. Even if you eventually hire a pro, understanding the process gives you the vocabulary to tell them what you want. It demystifies the dark art.

Conclusion: Trust Your Taste

Ultimately, technical perfection is secondary to emotional impact. Some of the best records in history are "badly" mastered by modern standards. They are distorted, quiet, or muddy. But they feel right.

Don't obsess over the numbers. Don't stare at the LUFS meter until your eyes bleed. Close your eyes. Does the song move you? Does it sound like Tos Connect? If the answer is yes, print it. It's done.

Enjoying the music?

If these production notes helped you, consider fueling the next session.

☕ Buy me a Coffee
Nov 27, 2025 Artist Development

Killing Your Idols: The Painful Journey from Imitation to Originality

Every artist starts as a thief. This isn't an insult; it is a developmental necessity. When we first pick up an instrument or open a DAW, we do it because we want to sound like someone else. We want to be Daft Punk. We want to be The Weeknd. We want to be James Blake.

For the first three years of my musical life, I didn't exist. I was just a bad photocopy of my heroes. I spent hundreds of hours trying to recreate the exact synthesizer patch from a specific French House record. I tried to sing with the exact same inflection as my favorite R&B vocalists.

But eventually, you hit a wall. You realize that no matter how hard you try, you will never be as good at being Daft Punk as Daft Punk is. This realization is painful, but it is the most important moment in your career. It is the moment you have to "kill your idols" to find your own voice. Here is how I navigated the transition from imitation to Tos Connect.

The "Taste Gap"

There is a famous quote by radio host Ira Glass about the "Gap." He explains that when you start, your taste is killer, but your skills are weak. You know what good music sounds like, which is why you know your own music disappoints you. It doesn't sound like your heroes.

Many artists quit in this gap. They think, "I can't make it sound like the radio, so I must not be talented." But the gap is where the magic happens. The gap is where your "Sound" lives.

My sound—Electronic Soul—was born in that failure. I tried to make slick, perfect Pop music, but my production skills weren't polished enough. My drums were too gritty. My synths were too weird. Instead of "fixing" those mistakes to sound like the radio, I leaned into them. I stopped trying to be polished and started trying to be interesting.

The Venn Diagram of Influence

If you copy one person, you are a clone. If you copy five people, you are original. This is the "Venn Diagram" theory of creativity.

To find my own lane, I made a list of three disparate genres that usually don't go together:

  1. Detroit Techno: Repetitive, mechanical, industrial.
  2. 1970s Soul: Warm, emotional, human vocals.
  3. Ambient/Cinematic: Spacious, reverb-heavy, no beat.

By forcing these three circles to overlap, I found a small sliver of uncharted territory. If I just made Techno, I would be competing with thousands of Techno producers. But by making "Ambient Soul Techno," I created a micro-genre where I could be the king. Tos Connect exists in the friction between those genres.

Embracing Your Limitations

We often think our unique sound comes from our strengths. I argue it comes from our weaknesses. What you can't do is just as defining as what you can do.

For example, I am not a virtuoso keyboard player. I can't play fast, complex jazz solos like Herbie Hancock. Because of this limitation, I am forced to write simple, slow melodies. I rely on texture and sound design to keep it interesting, rather than flashy playing.

This limitation became a signature style. The "Tos Connect Sound" is slow, deliberate, and textural. If I were a better piano player, my music might actually be more generic because I would be tempted to overplay. Your inability to mimic your heroes perfectly is actually your brain trying to create something new. Listen to it.

The Vocal Fingerprint

The hardest idol to kill is the Vocal Idol. We subconsciously mimic the singers we admire. I used to try to sing with a lot of vocal runs and melisma because I thought that's what "Soul" singers did.

But that wasn't my voice. My natural speaking voice is lower, quieter, and more conversational. When I tried to belt high notes, it sounded strained and fake. The breakthrough came when I recorded No Hard Work. I was tired. I didn't have the energy to "perform." I just sang the lyrics exactly how I would speak them to a friend in a dark room.

When I listened back, I got chills. It didn't sound like a "Singer performing a song." It sounded like a person telling the truth. That "Conversational Tone" became the anchor of my vocal identity. Stop trying to sound pretty. Try to sound like you.

Steal the Thinking, Not the Sound

When artists say "Good artists borrow, great artists steal," they don't mean stealing melodies. They mean stealing the thinking process.

Instead of asking, "What synth preset did they use?" ask, "Why did they choose to put a synth there?"

When I listen to a Prince record, I don't try to copy his drum sound (which is impossible). I copy his bravery. I copy his willingness to leave the bass guitar out of a song entirely (like in When Doves Cry). I steal his philosophy of minimalism. Applying Prince's philosophy to my own electronic sounds creates something new. Copying his snare drum just creates a bad Prince cover.

Conclusion: You Are the Niche

The algorithm wants you to fit into a neat little box. It wants you to be "Lo-Fi Beats to Study To" or "Sad Indie Pop." But the artists we remember forever—David Bowie, Björk, Frank Ocean—never fit in boxes. They built their own boxes.

Killing your idols is terrifying. It means stepping off the paved road and walking into the jungle with a machete. You will make bad music for a while. You will feel lost. But eventually, you will stumble upon a sound that feels familiar, yet completely new.

You will realize it sounds like you. And once you hear that, you can never go back to being a copy.

Nov 25, 2025 Philosophy & Culture

The Lost Art of Deep Listening: Reclaiming Attention in the Age of the Scroll

We are consuming more music than any generation in history. We have access to 100 million songs in our pockets. We play music while we drive, while we work, while we work out, and while we cook. Music has become the soundtrack to our lives.

But there is a difference between hearing and listening. We have become experts at "Passive Listening"—treating music as a utility, a tool to drown out silence or boost productivity. We have forgotten "Deep Listening"—the act of giving a piece of music 100% of our attention, doing nothing else but existing inside the sound.

As the creator of Tos Connect, my goal isn't just to make you nod your head while you answer emails. My goal is to invite you back into the practice of Deep Listening. Here is why reclaiming your attention is the most radical act you can perform in 2025.

The Economy of Distraction

We live in an Attention Economy. Every app on your phone—Instagram, TikTok, YouTube—is engineered by thousands of PhDs to steal your focus. They fragment your attention span into 15-second chunks. This fragmentation is the enemy of art.

Electronic Soul is designed to be "Anti-Algorithm." It is slow. It creates space. It requires patience. When you listen to a track like The Lie Was Worse, there are layers of texture buried deep in the mix that you will literally never hear if you are scrolling through a feed at the same time. You miss the ghost notes. You miss the subtle panning. You miss the emotion.

Deep Listening is an act of rebellion. It is a refusal to let the algorithm dictate your pace. It is saying: "For the next four minutes, I choose to be here, and nowhere else."

The Ritual of the Album

In the streaming era, the "Playlist" has killed the "Album." We curate vibes—"Chill Beats," "Gym Hype," "Sad Hours." We strip songs out of their context and shuffle them together.

But an album is like a novel. You wouldn't read Chapter 14 of a book, then Chapter 3 of a different book, then Chapter 8 of a third book. You would lose the narrative. Music is the same. I spend months sequencing the tracklist of a Tos Connect EP. The key of Song A flows into the key of Song B. The lyrical themes evolve from despair to hope.

To practice Deep Listening, try this experiment: Pick an album (it doesn't have to be mine). Put your phone in another room. Turn off the lights. Play the album from Track 1 to the end, without skipping. It will feel uncomfortable at first. Your brain will itch for dopamine. But after 10 minutes, you will sink into a state of immersion that feels like meditation.

The Physicality of Sound (Why Audio Quality Matters)

We have traded quality for convenience. Bluetooth is amazing, but it compresses audio. It throws away data to make the file smaller. Listening to a low-quality MP3 on cheap earbuds is like looking at a pixelated photo of the Mona Lisa.

I encourage my listeners to invest in wired headphones or good speakers. You don't need to spend thousands. A $100 pair of wired studio headphones will reveal details you never knew existed. You will hear the breath before the lyric. You will feel the sub-bass in your jaw, not just your ears.

There is a physical sensation to high-fidelity audio. Some research even suggests that rich, full-spectrum sound can lower stress hormones like cortisol. When the sound is full and uncompressed, it wraps around you like a weighted blanket. That physical embrace is what we are missing in the digital age.

Sonic Fasting: Resetting Your Ears

Just as we can overeat, we can "over-listen." If you blast noise into your ears for 12 hours a day, your hearing fatigues. You lose sensitivity to dynamics. Everything starts to sound flat.

I practice "Sonic Fasting." I try to spend at least one hour a day in total silence. No podcasts, no music, no TV. Just the sound of the room. This resets my baseline.

When you break the fast and finally play a song, the impact is explosive. Colors seem brighter. Emotions hit harder. By depriving yourself of sound for a short time, you remind yourself of its value. You stop taking it for granted.

Active Listening Techniques

How do you actually "Deep Listen"? Here is a technique I use called "The Spotlight Method."

Play a song you love. On the first listen, focus your mental spotlight only on the Bass. Ignore the singer. Just follow the bassline. Notice how it interacts with the kick drum. Notice when it stops.

On the second listen, move the spotlight to the Reverb. Don't listen to the instruments; listen to the space around them. Is it a small room? A large hall? A cave?

On the third listen, focus on the Emotions. How does this chord change make your body feel? Does your chest tighten? Do your shoulders drop?

This active engagement turns listening from a passive consumption into a creative act. You are participating in the music.

Conclusion: The Invitation to Slow Down

We are racing toward a future of AI-generated content, metaverse concerts, and instant gratification. Speed is the currency of the new world. But art operates on a different currency: Resonance.

You cannot speed-run a feeling. You cannot optimize a cry. Tos Connect is my attempt to build a sanctuary of slowness. It is an invitation to stop running.

Thank you for reading these 20 articles. Thank you for exploring the lyrics, the production, and the philosophy behind the music. But now, the reading is done. The theory is over.

Put down the phone. Close your eyes. Press play. Let's connect.

Nov 20, 2025 Music Theory & Psychology

The Comfort of the Minor Key: Why Sad Music Makes Us Feel Good

If you look at the Top 40 charts, most pop songs are in Major keys. They are bright, happy, and energetic. They are designed to make you dance or smile. But if you look at the comments section of a Tos Connect track, or any Electronic Soul record, you see a different kind of reaction. You see people saying, "This track understands me," or "This is my 3 AM safe space."

My music is almost exclusively written in Minor keys. To a music theorist, minor keys are "sad." But to the listener, they often feel "comforting." This creates a paradox: Why do we run to "sad" music when we want to feel better?

The answer lies at the intersection of biology, psychology, and acoustics. Here is why the darkness of the minor key is actually a source of light, and how I use harmonic theory to build a sanctuary in sound.

The Biology of Catharsis

Scientists have studied what happens in the brain when we listen to melancholic music. One leading hypothesis is that it triggers the release of Prolactin, a hormone usually released after crying or during sleep. Prolactin produces a feeling of calmness and tranquility.

When you are stressed or anxious, your body is in a state of high alert. "Happy" music can sometimes feel irritating in this state because it creates a dissonance between how you feel (bad) and what you hear (good). It feels like toxic positivity.

However, "Sad" music validates your internal state. It matches your vibration. This validation allows your brain to process the negative emotion and release it. This is called Catharsis. In The Lie Was Worse, the dissonance in the chords isn't meant to depress you; it is meant to draw the poison out. It is an audio detox.

The "Safe Space" Theory

There is a psychological concept known as "Vicarious Emotion." When you watch a horror movie, you feel fear, but you know you are safe in a movie theater. Because there is no real threat, the fear becomes thrilling.

Sad music works the same way. It allows you to experience the depth of heartbreak, loneliness, or grief within a safe, controlled environment. You can dip your toe into the void without falling in.

I design my soundscapes to function as this "Safe Container." I use warm, analog synth pads that wrap around the stereo field like a blanket. Even if the lyrics are devastating, the texture of the sound is holding you. It creates a space where it is okay to not be okay.

Beyond "Happy" and "Sad": The Dorian Mode

Not all minor keys are created equal. If you just play a natural minor scale, it can sound overly dramatic or "funeral-like." That isn't the vibe of Electronic Soul. I rarely use the Natural Minor.

My secret weapon is the Dorian Mode. In music theory, Dorian is a minor scale with a raised 6th note. That single note changes everything. It adds a flavor of "hope" to the sadness. It sounds sophisticated, jazzy, and soulful.

Tracks like Blue Light Silhouette rely heavily on Dorian harmony. It creates a feeling of "sweet sorrow." It captures the feeling of walking through a city at night—it’s lonely, yes, but it’s also beautiful. That mixture of bitter and sweet is much more realistic to the human experience than pure major (happy) or pure minor (sad).
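That one-note difference is easy to see in code. Here is a quick sketch (note names simplified to sharps; the `spell` helper is mine, not from any library):

```python
# Semitone offsets from the root. The only difference is the 6th degree:
# natural minor has a flat 6th (8 semitones), Dorian raises it to 9.
NATURAL_MINOR = [0, 2, 3, 5, 7, 8, 10]
DORIAN        = [0, 2, 3, 5, 7, 9, 10]

NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def spell(root: int, scale: list[int]) -> list[str]:
    """Spell a scale from a root pitch class (0 = C, 2 = D, ...)."""
    return [NOTES[(root + step) % 12] for step in scale]

print(spell(2, NATURAL_MINOR))  # ['D', 'E', 'F', 'G', 'A', 'A#', 'C']
print(spell(2, DORIAN))         # ['D', 'E', 'F', 'G', 'A', 'B', 'C']
```

In D, the whole "sweet sorrow" flavor comes down to one note: Bb (spelled A# above) rising to B.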

The Interval of Longing: The Minor 9th

If you want to know the specific sound of "yearning," it is the minor 9th chord: a minor triad with an added 9th, a note 14 semitones above the root.

That added note creates friction, a physical sensation of "reaching." It sounds like a question that hasn't been answered yet. I use Minor 9th chords on my pads constantly. They never feel fully resolved. They float.
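To put numbers on that 14-semitone reach: in equal temperament, every semitone multiplies frequency by the twelfth root of two. A small sketch (the Am9 voicing built from A2 is just an example of mine):

```python
def note_freq(root_hz: float, semitones: int) -> float:
    """Equal-temperament pitch: each semitone is a factor of 2**(1/12)."""
    return root_hz * 2 ** (semitones / 12)

# An A minor 9 built from A2 (110 Hz): root, minor 3rd, 5th, minor 7th,
# and the 9th floating 14 semitones above the root.
for name, st in [("A", 0), ("C", 3), ("E", 7), ("G", 10), ("B", 14)]:
    print(f"{name}: {note_freq(110.0, st):.1f} Hz")
# The 9th (B) lands near 246.9 Hz, a whole step above the root's octave at 220 Hz.
```

That whole-step rub above the octave is the "unanswered question" in audible form.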

This lack of resolution keeps the listener engaged. If a song resolves perfectly to the home chord every four bars, your brain checks out. It knows the story is over. But if the chords keep "reaching" without ever fully landing, your brain stays locked in, waiting for an answer. That tension is the engine of the song.

Tempo and Heart Rate

The "sadness" of a track isn't just in the notes; it's in the speed. Most modern pop music sits around 120-128 BPM (Beats Per Minute), which is designed to raise your heart rate for dancing.

I tend to write in the 80-100 BPM range. This is significant because it is close to the resting heart rate of a relaxed human. When you listen to music at this tempo, your body naturally tries to sync up with it. It acts as a biological regulator.

In No Hard Work, the groove is lazy. It sits slightly behind the beat. This drags the listener's internal clock down. It forces you to slow down your breathing. It is anti-anxiety medication in audio form.
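The tempo math is worth internalizing, because it is also how producers dial in delay times. A tiny helper (my own, not a standard API):

```python
def beat_ms(bpm: float, division: float = 1.0) -> float:
    """Length of one beat (or a fraction of one) in milliseconds."""
    return 60_000.0 / bpm * division

print(round(beat_ms(126)))        # 476 ms per beat: club tempo
print(round(beat_ms(90)))         # 667 ms per beat
print(round(beat_ms(90, 0.75)))   # 500 ms: a dotted-eighth delay at 90 BPM
```

At 90 BPM a beat lasts roughly two-thirds of a second, right in the territory of a relaxed pulse (60-100 beats per minute), which is why the body syncs to it so easily.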

Conclusion: Darkness Defines the Light

We are often told to "look on the bright side." But you cannot see the stars unless it is dark. Electronic Soul is about embracing the darkness, not to wallow in it, but to find the beauty inside it.

The minor key isn't a place of despair; it is a place of honesty. It is the sound of the mask coming off. And in a world where everyone is pretending to be happy all the time, honesty is the most comforting feeling there is.

So, turn down the lights. Put on your headphones. Let the minor chords wash over you. You aren't sad; you are just feeling deep.
