WEEKLY LEARNING

 

Instruction

sonic design.pdf, by Tian Dong

WEEK 1

This class focuses on a sonic design project. The core idea is to create an ambient auditory experience through sound, allowing the audience to imagine the scene (e.g., "walking from a location to a coffee shop") solely through sound, rather than relying on visual elements.
  • Key Project Requirements and Principles
1. Core Principle: Audio Dominance, Minimize Interference
  • Music is prohibited in this project, as it easily becomes the focal point and obscures the narrative function of ambient sound. We need to focus the audience on the sounds of the environment itself (e.g., traffic, crowds, nature, etc.).
  • Avoid excessive use of voice dialogue; voice should serve only as a narrative element and not dominate the scene. Cartoons and flashing visual effects are also discouraged; visuals should only be supplementary, with the focus always on the sound itself.
2. Scene Design Requirements
  • The scene should be specific, such as "walking to a coffee shop in the city" or "market environment." It should include both background sound and dynamic elements (e.g., "traffic outside the window and the sounds of people chatting nearby").
  • The scene must have an "auditory imagery," allowing the audience to associate space, action, and atmosphere through sound. For example, the sound of footsteps, the doorbell, and the clinking of coffee cups can create a complete scene of "entering a coffee shop."
  • Project Structure and Arrangement
1. Project Module Division
  • The course involves two main projects, both centered around "building an environment through sound," but with different scenarios and technical requirements (the specific name of the second project is not specified; the instructor mentioned that "the second project is a different spatial model").
  • The project consists of "basic exercises + complex tasks": first, master sound acquisition and editing skills through basic exercises (the exercises have a high score rate, 80%-90% achievable), and then integrate these basic skills into complex scene design.
  • Time and Schedule
- There will be a 10-minute group discussion/practical session at the end of the course (the instructor has not yet fully specified whether the specific format will be WeChat chat or group tasks).

- Practical exercises will be included next week, related to "original sound acquisition and basic sound editing." You should prepare your ideas for the "environmental scene" in advance.
  • Tools and Technical Support
Core Software:

- Adobe solutions are primarily used (specific software is not specified; a subscription is required).

- An alternative tool (which the instructor called "deeper") offers similar functionality and is simpler to use, suitable for those without an Adobe subscription.

Equipment Recommendations:

- Ordinary headphones may not accurately reproduce sound quality; calibrated professional equipment is recommended.

- If budget is limited, consider cost-effective audio equipment (such as the "HD 662 or 802" model mentioned by the instructor, but be careful to avoid treble overload).
  • Other Important Course Content
1. Homework and Practice Tips
  • Basic Exercises (practical in class): Focus on "sound collection and classification." For example, recording background sounds for three different scenarios, distinguishing between "fixed background sounds" (such as air conditioning) and "dynamic sounds" (such as footsteps).
  • Research ambient sound references online in advance, but be sure to state two core reasons for each selection (avoid aimlessly piling up source material).

2. Notes
  • Project submissions must include a "scene description + sound source material + a brief design brief" in a consistent format (the instructor mentioned "GAS format," details of which will be provided later);
  •  Avoid directly copying existing audio materials; incorporate your own audio collection and editing to demonstrate originality.
  • If using sound sources from others (e.g., character dialogue), ensure compliance and avoid copyright issues.
  • To be clarified/added later
1. The specific name and scene requirements for the second project (the instructor mentioned "different spatial models," which will be explained in detail next week);

2. Detailed Adobe software tutorials (the instructor will demonstrate tool setup in a subsequent class);

3. Project budget details (the instructor mentioned "a separate project budget statement will be provided").

WEEK 2-Sound Fundamentals 

The teacher did not give a lecture this week, so I did some independent study after class.

During this week's study, I looked at the nature of sound: how it is generated, captured and analyzed, digitally processed, and edited using Pro Tools.
Key topics include:

• Sound generation, propagation, and perception

• The structure of the human ear and the principles of hearing

• Psychoacoustics

• The physical properties of sound waves (wavelength, frequency, amplitude, etc.)

• The main properties of sound (pitch, loudness, timbre, spatial perception, etc.)

Nature of Sound

Sound is the vibration of air molecules.

• Production: Produced by the vibration of an object.

• Propagation: Transmitted through a medium (air).

• Perception: Received by the ears and interpreted as sound by the brain.

➡️ Sound waves are the propagation of pressure changes through air.

The Structure of the Human Ear

It is divided into three parts: the outer ear, the middle ear, and the inner ear.

The outer ear consists of the auricle and the ear canal, which collect sound waves.

The middle ear consists of the eardrum and the three ossicles (malleus, incus, and stapes).

The inner ear consists of the cochlea and the semicircular canals, which convert sound waves into nerve signals.


The human ear is an extremely sensitive and complex sensory system.

Psychoacoustics

Psychoacoustics is the study of the human subjective perception of sound, including pitch, loudness, and timbre. It explores how sound affects human emotions and nervous system responses.

Psychoacoustics helps us understand why certain sounds sound pleasant or unpleasant.

Properties of Sound Wave

1. Wavelength: The physical length of one complete cycle of the wave.

2. Amplitude: The height of a wave, which determines the loudness of a sound.

3. Frequency: The number of times a wave occurs per unit time (measured in Hz or kHz). Higher frequency → higher pitch.
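To make the frequency–pitch link concrete, here is a minimal Python sketch (pure standard library; the function name and defaults are my own, not from the course) that generates samples of a sine tone at a chosen frequency. Doubling the frequency raises the pitch by one octave.

```python
import math

def sine_wave(freq_hz, duration_s=0.01, sample_rate=44100):
    """Generate samples of a sine tone at the given frequency."""
    n = int(duration_s * sample_rate)
    return [math.sin(2 * math.pi * freq_hz * i / sample_rate) for i in range(n)]

low = sine_wave(220)   # A3: lower pitch
high = sine_wave(440)  # A4: one octave higher (double the frequency)
```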

6 Properties of Sound


1. Pitch – Determined by frequency

2. Loudness – Determined by amplitude

3. Timbre – The "texture" or character of a sound

4. Perceived Duration – The length of time a sound is perceived

5. Envelope – The variation in sound energy over time

6. Spatialization – The direction and spatial location of a sound

Self-reflection

This week's self-study has given me a clearer understanding of the nature of sound—sound is essentially the vibration of air molecules, audible only through propagation and perception. I'm particularly interested in psychoacoustics because it reveals how sound affects our emotions and feelings. Experimenting with sounds of different frequencies has also given me a more intuitive understanding of the differences between pitch and loudness. I hope to better apply these fundamental concepts in practice, laying the foundation for future sound design.

WEEK 3—Sound Shaping 


This week's main course introduces sound shaping. The instructor explains that sound shaping is the process of refining and transforming audio to create a specific tonal character or sense of space. In this lesson, students will explore two key sound-shaping tools: parametric EQ and reverb. Parametric EQ allows precise control of frequency bands, allowing adjustments to make sounds brighter, warmer, or more focused. Reverb, on the other hand, enhances the sense of space and depth, helping to place sounds in specific environments—such as a small room, a large hall, or an open space. Together, these tools form the foundation of sound design, enabling students to shape how sounds are perceived and experienced.

The focus will be on the use of EQ. Unlike the tools discussed last week (Week 2), this week's course will use EQ to alter sound recordings to suit design goals. The course includes a frequency review, sound-training lectures, and exercises. Students will master how to use EQ to adjust sound characteristics, such as simulating the sound of a telephone or of an enclosed space. Limiters will also be explored to prevent audio distortion. Finally, they will complete a six-band sound simulation.

Self-reflection

This week, I learned how sound shaping allows us to refine and transform audio to create specific tonal qualities and spatial impressions. Through understanding parametric EQ and reverb, I realized how small frequency adjustments or added reverberation can completely change the atmosphere of a sound. The exercises helped me develop a more precise ear for tone and balance. I found it interesting how EQ can simulate different spaces, like a small room or a large hall. Moving forward, I want to keep practicing these tools to design sounds that better fit emotional and spatial contexts.

WEEK 4-Sound space and environmental sound design

This class mainly covered how to create a more three-dimensional and realistic sound space through Automation and EQ adjustment in Adobe Audition (AU). The teacher not only explained the technical aspects, but also led us to think about the logic of sound narrative: sound is not just raw material, but an important element in building space and atmosphere.

(The temporal structure and characteristics of sound)

The teacher first introduced the basic composition of sound - the whole process from "silence" to "maximum volume" to "disappearance", which is divided into the following stages: 

• Attack: The time it takes for a sound to go from rest to reaching its highest volume (for example, a plane taking off is slower, a drum beat is very fast). 

• Decay: The transition from peak to steady volume. 

• Sustain: The stage in which the sound remains stable. 

• Release: The process by which a sound gradually fades away.

Understanding these characteristics helps us judge the "sense of strength" and "sense of space" of different sounds.
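The four stages above can be sketched as a simple piecewise-linear gain curve. This is only a toy illustration in Python (the function and its fractional-length defaults are my own, not anything shown in class):

```python
def adsr_envelope(n_samples, attack=0.1, decay=0.1, sustain_level=0.7, release=0.2):
    """Piecewise-linear ADSR gain curve; attack/decay/release are fractions
    of the total length, sustain fills the remainder."""
    a = int(n_samples * attack)
    d = int(n_samples * decay)
    r = int(n_samples * release)
    s = n_samples - a - d - r
    env = []
    env += [i / a for i in range(a)]                            # attack: 0 -> 1
    env += [1 - (1 - sustain_level) * i / d for i in range(d)]  # decay: 1 -> sustain
    env += [sustain_level] * s                                  # sustain: steady
    env += [sustain_level * (1 - i / r) for i in range(r)]      # release: -> 0
    return env
```

A fast drum hit would use a tiny attack fraction; a plane taking off would use a long one.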

(Automation control)

Then the teacher explained the difference between Clip Automation and Track Automation: 

• Clip Automation: bound to the audio clip, so the automation moves with the clip; suitable for local effects or volume changes. 

• Track Automation: applies to the entire track and does not move with clips; better suited to scenes with a fixed timeline, such as video dubbing or a fixed narrative structure.

We tried using automated adjustments in our exercises: 

• Sound source direction (left, center, right) 

• Volume fade 

• EQ (high and low frequency) changes

These can be used to simulate the feeling of sound "moving" in space, such as a person walking from left to right, or approaching from a distance.
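The "sound source direction" part of the exercise can be illustrated with constant-power panning, a common way to place a mono sample between the left and right channels. This is a sketch of the general idea in Python (my own naming, not how AU implements it):

```python
import math

def pan(sample, position):
    """Constant-power pan: position -1 (left) .. 0 (center) .. 1 (right).
    Sweeping position over time simulates a source walking across the room."""
    angle = (position + 1) * math.pi / 4
    return sample * math.cos(angle), sample * math.sin(angle)
```

Sweeping `position` from -1 to 1 over successive samples gives the left-to-right walking effect described above.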

(EQ and spatial distance)

The teacher demonstrated how to simulate the sense of distance through the EQ filter: 

• When the sound moves away: high frequencies will be weakened and overall more blurred; 

• When sounds are closer: high frequencies are clearer and louder.

This method makes the sound not just a "flat" existence, but can produce depth and spatial levels. For example, when a person walks into a cave, the sound will gradually become reverberated and have low-frequency resonance. This change can be achieved using EQ and reverb together.
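The "distance weakens high frequencies" idea can be mimicked with the simplest possible low-pass filter. This one-pole smoother is only a rough stand-in for AU's parametric EQ, written in Python for intuition (names and the alpha default are mine):

```python
def one_pole_lowpass(samples, alpha=0.2):
    """Simple one-pole low-pass: smaller alpha cuts more high frequency,
    mimicking a sound source moving farther away."""
    out = []
    y = 0.0
    for x in samples:
        y = y + alpha * (x - y)  # each output leans toward the input slowly
        out.append(y)
    return out
```

Low (slowly varying) content passes through almost unchanged, while rapidly alternating high-frequency content is strongly attenuated, which is exactly the "blurred, distant" effect described above.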

(On-site control and dynamic recording)

The teacher also introduced the use of Write mode to make real-time adjustments.
By sliding a controller or the mouse, you can record dynamic changes directly during playback, giving the sound a more natural, breathing quality. This is very useful when creating ambient sounds or interactive scenes.

Reflection

Today’s class gave me a better understanding of the role of sound in spatial narrative. In the past, I only focused on the sound itself, but now I start to think about it **"where it comes from and where it goes"** - the direction, speed, and spatial relationship of the sound will all affect the audience's feelings.
Through practicing Automation and EQ, I feel like I'm starting to "sculpt" the sound rather than just play it.

WEEK 5-Explore the shapes and emotions of sound

🧠 Class Summary:

The teacher first introduced five basic sound design tools and concepts:

1. Layering



Just like layers in Photoshop in graphic design, sound design can create richer layers and a sense of space by stacking different sound sources. A complete sound effect is often composed of multiple sounds.

2. Time Stretching


Changing the playback speed of a sound without affecting its pitch.

For example, stretching a 10-second speech to 20 seconds will make it slower and deeper;
compressing it to 6 seconds will make it faster and more rapid.

This is very useful for controlling rhythm or expressing the speed of a character's movements.
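As a rough intuition for how time stretching differs from simply speeding up playback, here is a crude granular sketch in Python (function name and grain size are my own; real tools like AU's Stretch and Pitch are far more sophisticated and avoid the artifacts this naive version produces):

```python
def time_stretch(samples, factor, grain=1000):
    """Crude granular time stretch: write fixed-size grains while advancing
    the read position at a different rate, so duration changes while the
    local waveform (and thus the pitch content) inside each grain is kept.
    factor > 1 lengthens the sound; factor < 1 shortens it."""
    out = []
    pos = 0.0
    step = grain / factor
    while int(pos) + grain <= len(samples):
        out += samples[int(pos):int(pos) + grain]
        pos += step
    return out
```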

3. Pitch Shifting



Changing the pitch of a sound without affecting its duration.

• Raising the pitch → The sound becomes sharper, lighter, and more cute (commonly used for small animals or cartoon characters);

• Lowering the pitch → The sound becomes thicker, deeper, and more stable (commonly used for monsters, robots, etc.).

The teacher emphasized that pitch is closely tied to the character's image:

Small characters should have high pitches, while large creatures should have low pitches.
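Naive resampling raises the pitch and shortens the sound at the same time; dedicated pitch shifters exist precisely to decouple the two. A minimal Python sketch of the naive version (my own naming), just to show the coupling:

```python
def resample(samples, factor):
    """Read the input 'factor' times faster: pitch rises and duration shrinks
    together. (A real pitch shifter keeps duration fixed; this shows why
    that is nontrivial.)"""
    n = int(len(samples) / factor)
    return [samples[min(int(i * factor), len(samples) - 1)] for i in range(n)]
```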

4. Reversing


Reversing a sound can create strange or unnatural aural effects.

For example, by duplicating an explosion and then overlaying it in reverse, you can create the sensation of "energy gathering and then exploding."
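Reversing is the simplest of these tools to see in code: it is just reading the samples backwards. A tiny Python sketch of the "gathering then exploding" trick (function name is mine):

```python
def gather_then_explode(samples):
    """Prepend the reversed clip so the sound swells in before it hits:
    the mirror-image build-up leads straight into the original transient."""
    return samples[::-1] + samples
```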

5. Mouth It!


If you can't find the right sound effect, record it with your mouth!

Sound designers often use beatboxing, breathing, or hissing to record the sound, then use tools like pitch shifting, reverb, and layering for post-processing.

The instructor also demonstrated how to apply these techniques in Adobe Audition (AU), including:

• Using the Stretch and Pitch Process to adjust both time and pitch simultaneously;

• Using the Limiter to limit volume peaks and avoid distortion;

• Reversing or mixing tracks in Waveform View;

• Layering different sound effects to enhance spatial perception through Multi-track Sessions.


🎬 Class Exercise Introduction

In class, we will complete Exercise 1:


Download the three sound effects and experiment with the sound design tools to achieve:

1. Variation of punch sound

2. Monster or alien voice

3. Deep rich explosion

🧩 My Production Process

① Variation of Punch Sound

I started with a basic punch sound, adjusted its timing and pitch using AU, and layered a low-frequency impact sound.

Pitch shifting lowered the pitch to make the punch sound more powerful, and then added a slight reverb to create the feeling of a close-range blow.


Track 1


Track 2




Track 3


Variation of Punch Sound recording


Variation of Punch Sound File


② Monster or Alien Voice


I used harmony, pitch shifter, delay, and parametric EQ.

Delay



Parametric Equalizer


Harmony


Pitch Shifter


Monster or Alien Voice Recording


Monster or Alien Voice File


③ Deep Rich Explosion


I experimented with creating multi-track explosion sounds.

The goal was to give a simple explosion sound depth, spatiality, and dynamics through layering, tuning, and effects.

The entire project used four tracks (Tracks 1–4), each responsible for a different sonic layer.

Track 1


Track 2


Track 3


Track 4


During the final mix, I adjusted the volume of each track (for example, boosting Track 2 by +3.8 dB and Track 3 by +4.6 dB) to ensure the layers complemented each other without overloading.
A Limiter was also used on the master track to control the overall output, ensuring the explosion retained its intensity without distortion.
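Those dB boosts translate into linear amplitude multipliers via the standard 20·log10 amplitude convention; a quick Python check (variable names are mine):

```python
def db_to_gain(db):
    """Convert a dB fader change into a linear amplitude multiplier."""
    return 10 ** (db / 20)

# The boosts used on Tracks 2 and 3 correspond roughly to:
track2_gain = db_to_gain(3.8)  # about 1.55x amplitude
track3_gain = db_to_gain(4.6)  # about 1.70x amplitude
```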


The explosion builds from nothing, gathering force before erupting, with a rich low end, clear highs, a full midrange, and a natural overall sense of space.

It sounded like a real explosion from a movie scene, brimming with energy and tension.

Deep Rich Explosion Recording


Deep Rich Explosion File


Reflection

This sound design exercise gave me my first real understanding of the complexity behind the construction of sound. I used to be mesmerized by explosions in movies, but after creating my own using Adobe Audition, I realized that a good explosion isn't just a single recording, but rather the result of the layering and modulation of multiple frequency bands and energy levels.

During the production process, I experimented with splitting a simple "lo-fi explosion" into four tracks:

The main explosion layer creates energy, the low-frequency layer adds depth, the high-frequency layer adds a sense of fragmentation, and the final reverse layer adds dynamics and emotional depth to the overall sound. The reverse effect, in particular, made me realize that sound design isn't just about embellishment; it can be a creative tool with narrative and rhythm.

Technically, I learned how to use parametric EQ to control the balance of different frequency bands, how to use hard limiters to prevent overload, and how to use reverb to create a sense of space. These detailed adjustments to the tools made me realize that sound design, like painting, is a meticulous craft of layering and atmosphere.

Overall, this exercise not only made me more familiar with Adobe Audition, but also taught me to think about "space," "emotion," and "energy flow" at the aural level. In future sound-related projects, I'll focus more on integrating sonic storytelling and rhythm, rather than simply creating a "pleasing" sound.

WEEK 6-Learning audio editing and mixing techniques

This week's lesson focused on audio editing, mixing techniques, and optimizing project structure. The instructor demonstrated how to properly edit sound, remove noise, and use fade-in/fade-out and crossfade to improve project quality in Adobe Audition.


Audio Editing Basics and Important Considerations

The instructor first emphasized the details of audio editing—when cutting audio, you must ensure the cut point falls on the zero line of the waveform.

Cutting at a high or low point of the waveform will produce a "click noise," affecting the quality of the final product.

To avoid this problem, the instructor suggested adding a **fade in** and a **fade out** effect at the beginning and end of each audio clip. This will make the sound transition more naturally and smoothly.

💡 Tip: A slight fade is sufficient; an excessively long fade will weaken the sound's energy.
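The zero-line rule and the short-fade tip can be combined in a few lines: force the clip to start and end at zero with a brief linear ramp. A Python sketch of the idea (my own function, not an Audition feature):

```python
def apply_fades(samples, fade_len=100):
    """Short linear fade-in and fade-out so a clip starts and ends at zero,
    avoiding the click you get from cutting mid-waveform."""
    out = list(samples)
    n = min(fade_len, len(out) // 2)
    for i in range(n):
        g = i / n
        out[i] *= g        # fade in: ramp up from silence
        out[-1 - i] *= g   # fade out: ramp down to silence at the end
    return out
```

Keeping `fade_len` small matches the tip above: just enough samples to reach zero, not enough to weaken the sound's energy.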

Crossfade and Audio Blending

Crossfade is an essential technique when connecting two audio segments.

By slightly overlapping the two audio segments and setting fade-in and fade-out at the overlap, the auditory transition is smoother, avoiding abrupt changes.

The instructor also specifically pointed out:

• If there is no automatic crossfade function, it can be achieved manually using multitrack overlay;

• All transitions should be smooth, without any "pops" or "jumps".
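The manual multitrack-overlay method amounts to this: overlap the two clips and ramp one down while the other ramps up. A minimal equal-power crossfade in Python (my own naming; AU's automatic crossfades may use different curves):

```python
import math

def crossfade(a, b, overlap):
    """Equal-power crossfade: mix the tail of `a` into the head of `b`
    over `overlap` samples, using cosine/sine curves so the combined
    energy stays roughly constant through the transition."""
    mixed = []
    for i in range(overlap):
        t = i / overlap
        mixed.append(a[len(a) - overlap + i] * math.cos(t * math.pi / 2)
                     + b[i] * math.sin(t * math.pi / 2))
    return a[:-overlap] + mixed + b[overlap:]
```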

The Use of Bus Tracks and Mixing Logic

In multi-track projects, to make the mix clearer and more layered, the instructor explained the use of Bus Tracks.

You can group similar sound types on the same bus, for example:

• All ambient sounds in one group;

• All dialogue or main sound effects in another group.

With Bus Tracks, you can:

• Control the volume of the entire group uniformly;

• Add overall reverb or EQ effects;

• Use Automation to achieve dynamic volume changes.

This structured approach makes the entire audio design more professional and facilitates fine-tuning in post-production.
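The grouping logic can be reduced to "sum the members, then apply one gain." A toy Python sketch (my own function, ignoring effects and automation):

```python
def mix_bus(tracks, bus_gain):
    """Sum several tracks into one bus, then apply a single group gain,
    mirroring how a Bus Track controls a whole family of sounds at once."""
    n = max(len(t) for t in tracks)
    bus = [sum(t[i] for t in tracks if i < len(t)) for i in range(n)]
    return [x * bus_gain for x in bus]
```

One fader move on the bus scales every ambient sound (or every dialogue clip) together, which is exactly the uniform group control described above.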

Volume Automation

The instructor specifically pointed out that many students' work lacked automation controls.

During the mixing stage, automation allows volume or panning to change over time, adding depth and dynamism to the audio.

For example:

• An ambient sound can gradually increase in volume as the story unfolds;

• The main track volume can automatically decrease after a dialogue ends, allowing background sounds to emerge.

This is the key difference between a "finished piece" and "practice editing."

Submission and Reflection Requirements

The teacher emphasized that assignments at this stage are still "practice-based projects," with the main goal of training editing and mixing skills.

Weekly tasks should include:

1. A completed audio practice piece;

2. A weekly reflection summarizing the production process and key technical points learned.

He encouraged us to record our learning experiences in a blog, as reflection helps us identify our progress and areas for improvement.

Reflection

This week’s Sonic Design class helped me understand the importance of small editing details in sound production.
Before this, I often focused on finding interesting sounds, but I didn’t realize how much difference a clean cut or smooth transition could make.

By learning how to use fade in/out, crossfade, and automation, I can now make my sounds blend better and feel more natural.
I also learned how to organize tracks using Bus Tracks, which makes the whole mix more balanced and easier to manage.

Through this exercise, I started to think more critically about how sounds connect and how the listener experiences them.
It’s not only about creating sound, but also about shaping emotion and rhythm through editing.
Next time, I want to pay more attention to automation and volume balance to make my mix more professional.

WEEK 7

The teacher provided feedback on each student's work, focusing on the specifications for sound space, volume levels, and surround sound output.

The teacher reminded everyone again to pay attention to the audio output format.

Regular projects use stereo (two channels), while spatial sound design requires 5.1 surround sound (Surround 5.1).

When exporting, please be sure to check that the channel distribution is correct and include your name in the filename, for example: Name_Project2.mp3, to facilitate the teacher's collection and identification.


Teacher's feedback on my work

The teacher specifically mentioned that the sound volume of the cooking scene in my work was too loud and sounded somewhat abrupt, requiring readjustment.

Reflection

In this class, the teacher offered very specific suggestions on my sound design work, especially pointing out that the volume of the "cooking" sound was too high, making it sound jarring. Through the teacher's feedback, I realized the importance of controlling volume levels and rhythmic variations in sound design. Sound should not only be realistic, but also feel natural and comfortable to the listener.

I reflected on my work, realizing that I focused too much on the detailed texture of the sound and neglected the overall dynamic balance. In the future, I will pay more attention to volume transitions during mixing, such as using fade-in/fade-out or gradual changes to make the sound transitions smoother. I will also listen repeatedly, considering the audience's perspective to see if there are any abrupt or distracting parts.

This feedback gave me a deeper understanding that sound design is not just about layering materials, but also an art of spatial storytelling. I will continue to improve the rhythm and spatial layering of my work, allowing the sound to better serve emotional expression and the story's atmosphere.

WEEK 8

The teacher said there are no classes today. We'll start preparing for our next project—an audio story creation project. We can finish the story narration and dialogue this week so we can discuss it next week. We were also reminded that we can look at examples and blogs from older students on the MyTimes module page for reference. The teacher will upload technical tutorial videos, which will replace today's lesson.


WEEK 9



This week's Sonic Design course focused on the post-production workflow: sound cleanup, noise reduction, breath-sound processing, dynamic control, and sound consistency. The instructor walked us step by step through the logic of professional audio post-production and demonstrated how to start from a raw recording and gradually clean it into an audio track suitable for storytelling and soundscape design.

The teacher first reminded us:

Task 3 is due next week, so we must start recording and cleaning up our audio this week.

This project is simpler than Task 2, but the standards are still strict:

- You must use your own voice (no AI voice allowed)

- You must include necessary sound effects, background noise, and music

- Storytelling, sound effects usage, and overall sound quality will all affect the grade

- Weighting of each part:

Storytelling: 25%

Sound Effect: 15%

Ambience & Music: 10%

Voice clarity & consistency: 50%

Recording Environment Tips: How to Reduce Noise?

The instructor shared many practical methods:

The most effective method is not buying equipment, but rather "shrinking the space and surrounding it with sound-absorbing materials."

For example, use blankets, pillows, and thick cotton covers to create a temporary recording studio.

Recording with a mobile phone is also possible, but the environment must be quiet enough.

The small "vocal box" commonly used by online influencers may not be effective.

It can only reduce ambient noise slightly, and has almost no effect in noisy rooms.

Key principle:

The smaller the space and the more it is surrounded by soft materials, the lower the noise level.

Noise Reduction Techniques

The instructor demonstrates the audition process:

First, create a noise print using pure ambient sound.

Then, use noise reduction to minimize continuous noise (such as air conditioner noise).

Note:

Excessive noise reduction can degrade sound quality.

Listen repeatedly, don't just look at the waveform.

Remove Anomalies

Examples:

  • Breathing sounds
  • Lip noises
  • Pop noises
  • Temporary background noises (e.g., chair sounds)

Processing methods include:

- Manually selecting and reducing

- Using Auto Gate for automated processing

Breathing Sound Processing (Auto Gate)

The logic of Auto Gate is:

  • "When the sound is loud enough (speaking), the gate opens → sound passes through."
  • "When the sound is too soft (breathing), the gate closes → sound is blocked."

The teacher emphasizes:

The Gate setting needs to be carefully adjusted.

Too strong will make your speech intermittent.

Threshold, Attack, and Release all need to be finely adjusted.
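The open/close logic of the gate can be sketched directly: compare each short block's peak with a threshold and mute the quiet ones. A toy Python version (my own names and defaults; real gates add the Attack/Release smoothing mentioned above, which this omits):

```python
def noise_gate(samples, threshold=0.05, window=100):
    """Mute spans whose local peak stays under the threshold:
    quiet breaths are blocked, speech passes through."""
    out = []
    for start in range(0, len(samples), window):
        block = samples[start:start + window]
        peak = max(abs(x) for x in block)
        out += block if peak >= threshold else [0.0] * len(block)
    return out
```

Set `threshold` too high and quiet speech gets chopped out too, which is exactly the "intermittent speech" failure the teacher warned about.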

Voice Consistency (Compression + Manual Leveling)

Since natural speech varies in volume, our teacher taught us to use:

✔ Manually adjust the volume (waveform adjustment)

✔ Compressor

To make the overall sound:
  • Consistent
  • Unobtrusive
  • More suitable for storytelling
After compression, use Make-up Gain to restore the overall volume.
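The compression-plus-make-up-gain chain can be sketched in a few lines. This is a simplified static peak compressor in Python (my own naming; real compressors add attack/release time constants):

```python
def compress(samples, threshold=0.5, ratio=4.0):
    """Static peak compressor: amplitude above the threshold is reduced
    by `ratio`, evening out loud and soft passages."""
    out = []
    for x in samples:
        mag = abs(x)
        if mag > threshold:
            mag = threshold + (mag - threshold) / ratio
        out.append(mag if x >= 0 else -mag)
    return out

def makeup_gain(samples, gain):
    """Raise the compressed (quieter) signal back up to level."""
    return [x * gain for x in samples]
```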

EQ (Equalizer): Shapes the texture of the sound

  • Low frequencies: Thickness, stability
  • Mid frequencies: The main body of vocals
  • Treble / High frequencies: Clarity, sibilance (S)
The teacher also demonstrated:

➤ De-esser: Reduces overly strong "S" sounds

But: Don't use too much, otherwise it will make the sound strange and unclear.

The Final Limiter

  • The last step in the entire audio production process
  • Ensures the audio doesn't clip
  • Makes the overall volume more consistent and suitable for export
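At its simplest, a limiter is a clamp on sample amplitude. A minimal Python sketch (a true brickwall limiter uses look-ahead gain reduction rather than hard clipping, so this only conveys the idea; the names are mine):

```python
def hard_limit(samples, ceiling=0.95):
    """Clamp every sample to the ceiling so the export never clips."""
    return [max(-ceiling, min(ceiling, x)) for x in samples]
```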

At the end of the class, I showed the teacher my solution, and the teacher gave the following suggestions.

When reviewing my classroom practice, the teacher first affirmed that I had mastered the basic sound cleanup process, but also pointed out that I still needed to improve on the details. The teacher mentioned that my EQ adjustments made the sound slightly "muffled," possibly due to my headphones' heavy bass, reminding me to rely more on actual auditory perception when judging timbre, rather than just looking at the waveform.

The teacher also noticed that several noises and unwanted sounds in the recording were not fully processed, especially some noticeable noises that the Auto Gate couldn't detect, which I needed to select and reduce manually. Additionally, the handling of breathing sounds wasn't refined enough; some breaths were even louder than the speech itself, requiring me to readjust the Gate parameters or reduce them manually.

Regarding overall volume balance, the teacher suggested using more stable compression to make the sound more uniform and natural, avoiding sudden spikes in the waveform. Despite this, the teacher could see that I was making an effort and that the direction was correct. With continued optimization of EQ, noise-reduction details, breath control, and volume consistency, the final product will be more professional, clear, and clean.

Before EQ equalizer modification


After the EQ equalizer was changed


Audio of class exercises


Source files for class exercises


Reflection

This week's audio processing course gave me a profound understanding that "clean sound" isn't just about recording; it's the result of accumulating many details. For example, professional steps like noise cleanup, automated breathing sounds, dynamic compression, and equalization are all designed to make the story heard more clearly. The instructor emphasized during demonstrations that "the ear is more important than the eye," urging us not to focus solely on waveform changes but to cultivate the ability to judge the quality of sound.

Finally, the instructor's feedback on my practice made me realize:

Sound clarity is more important than I thought.

Small noises must be thoroughly addressed.

EQ adjustments shouldn't be made haphazardly, especially when using low-frequency headphones; be particularly careful to avoid misjudging the settings.

These suggestions will be extremely helpful for my future sound storytelling projects.
