Hacker Public Radio

    HPR4640: Robert A. Heinlein

    15.05.2026
    This show has been flagged as Clean by the host.

    Robert A. Heinlein

    Robert A. Heinlein was the author who many people claim kicked off the Golden Age, though that can be the subject of many a barroom argument. E.E. “Doc” Smith was already an established writer by this time, and A.E. van Vogt was contemporaneous with Heinlein. But Heinlein managed to outshine everyone in very short order. He was widely known as “The Dean of Science Fiction Writers,” which testifies to his stature in the community, and along with Arthur C. Clarke and Isaac Asimov he was one of the Big Three of the Golden Age. He was the first person to be named a Science Fiction Grand Master, in 1974. Four of his novels won Hugo Awards (Double Star, Starship Troopers, Stranger in a Strange Land, and The Moon Is a Harsh Mistress), and seven more works were given Retro-Hugo Awards, which are given to works published before the Hugos were established. He also had many more works nominated for both awards, as well as for other honors such as the Nebula Awards. In short, he was a big deal to the science fiction community at large, and to me personally. I was, for a short time, managing the website for The Heinlein Society, and I have read every work of his that I am aware of.

    Heinlein Background

    Robert Anson Heinlein was born in 1907 in Butler, Missouri, and grew up in Kansas City, Missouri, which he described as the middle of the Bible Belt, and this background is reflected in some of his stories, particularly the later ones. His family tradition had it that the Heinleins had fought in every American war beginning with the War of Independence, and Robert and his brothers all joined the armed forces. Robert lied about his age when he was 16 in order to enlist in the Missouri National Guard, and a few years later obtained an appointment to the Naval Academy, graduating in 1929 with the equivalent of a bachelor's degree in engineering (the Naval Academy did not award degrees at the time). His engineering background is very apparent in his writings. He served on several ships, rising to the rank of Lieutenant, before being discharged in 1934 due to pulmonary tuberculosis. It seems likely that had he not contracted this illness he would have continued his career in the Navy, and with World War II coming, well, who knows what might have happened. But he did get ill, and had to find things to do. He notably got involved with Upton Sinclair's socialist organization EPIC (End Poverty in California). He ran for office unsuccessfully, running as a left-Democrat in a conservative district. And while he had a disability pension from the Navy, he turned to writing to pay off his mortgage.

    Heinlein’s Writing

    Heinlein was originally known as a “hard” science fiction writer, meaning one who puts plausible and accurate science at the heart of the story. But looking at his entire career, he was equally comfortable writing fantasy, though not the faux-medieval kind that many writers produce. In fact, he coined the term “speculative fiction” to describe the kind of stories he wrote. And if he wanted to he was quite capable of mixing the hard science and the fantasy, particularly in his later novels. And his output was very substantial. Asimov wrote more than Heinlein, but Heinlein stuck to fiction, while Asimov wrote in a variety of fields, so Heinlein's output in the general area of science fiction/fantasy is the greater. And he is known for works of all lengths, from short stories to novels. A useful guide to his works is the book Robert A. Heinlein: A Reader's Companion, by James Gifford. This book covers all of his science fiction/fantasy works known as of 2000, and gives additional information about the writing and circumstances of the stories. But in 2003 an early work was discovered and published. It was a novel called For Us The Living, and while you can see the germ of Heinlein's style in this novel, it is also a very early work, written in 1938, and is not one of his best. He would get a lot better than this. In any case, it was not published at the time, and is mostly of interest to Heinlein superfans or scholars.

    Heinlein got his real start in 1939 with a short story called Life-Line, which was published in John W. Campbell's Astounding magazine. Isaac Asimov had published a few stories by this time, and his first for Campbell's Astounding had appeared the previous month, in July 1939, so as you can see this was a very fertile time in the development of the genre. Heinlein's story was about a scientist who developed a technology to predict a person's time of death. This totally threatens the insurance industry, and one of the CEOs puts out a hit on the scientist, who of course already knows about it, having tested himself. This is not the best short story, but it was quite competent, and John W. Campbell immediately asked for more.

    More short stories followed. In the November 1939 issue of Astounding the story Misfit appeared. It introduces the character of Andrew Jackson “Slipstick” Libby, a young man with little education but a great ability to do mathematics in his head. And his ability turns out to be just what is needed during a construction project in space when things go wrong. In 1940 he had 9 more stories published. And at this point he faced a problem. He was becoming so prolific that for a number of reasons he had to employ pseudonyms for some of his stories. One reason was that he couldn't have too many stories under his own name in a single magazine; it made the editor look bad. In any case, all of the stories are now published under Heinlein's name. And of the 9 stories, 6 were either nominated for or won Retro Hugo Awards, and several also won Prometheus Hall of Fame Awards, given to the best libertarian or anti-authoritarian works. So you can see that his was a talent that exploded on the scene, so much so that you could legitimately divide science fiction history into pre-Heinlein and post-Heinlein periods.

    11 more stories of various lengths followed in 1941, and 5 in 1942. These were mostly short stories, but a few novellas and novelettes appeared. He was really a short fiction writer at this time, and there are some extraordinary stories in this group. He was the most successful writer of speculative fiction of the time, and passed along some advice to anyone who wanted to be a successful writer.

    Heinlein’s Rules of Writing

    Because he was so successful, it should come as no surprise that aspiring writers frequently wrote to him for advice, and in response he formulated his Rules of Writing. This is taken from his essay On the Writing of Speculative Fiction:

    1. You must write.

    2. Finish what you start.

    3. You must refrain from rewriting, except to editorial order.

    4. You must put your story on the market.

    5. You must keep it on the market until it has sold.

    He goes on to say in this article: “The above five rules really have more to do with how to write speculative fiction than anything said above them. But they are amazingly hard to follow—which is why there are so few professional writers and so many aspirants, and which is why I am not afraid to give away the racket!”

    This is very good advice, but as Heinlein points out his rules are indeed hard to follow. For example, Rule #1: You must write. Many people want to be a writer, but not as many really want to write, and there is a very distinct difference. Just as many people want to be a rock star, but don’t want to spend years dead broke playing in dive bars to get there.

    But it is also fair to point out that Heinlein was a rare talent, and I doubt if simply following his rules would make anyone else a similar success. They are good rules, no doubt, but Heinlein was already very familiar with and well-read in the field before he started writing.

    That finishes this particular exploration of where Heinlein came from and how he began his career. And since it all started with short fiction, I next want to focus on that, beginning with his Future History.

    This starts our look at the works of Robert A. Heinlein, the third of the Big Three authors of the Golden Age.

    Links:

    https://en.wikipedia.org/wiki/Robert_A._Heinlein

    https://www.amazon.com/Robert-Heinlein-Readers-Companion/dp/0967987407

    https://www.amazon.com/Us-Living-Comedy-Customs/dp/074325998X/ref=tmm_hrd_swatch_0

    https://en.wikipedia.org/wiki/On_the_Writing_of_Speculative_Fiction

    https://www.palain.com/science-fiction/the-golden-age/robert-a-heinlein/

    Provide feedback on this episode.

    HPR4639: NLUUG Spring Conference 2026

    14.05.2026
    This show has been flagged as Clean by the host.

    NLUUG Spring Conference 2026



    "NLUUG is the association of (professional) Open Source and Open
    Standards users in the Netherlands." You can follow them on
    @[email protected] on Mastodon.


    I was particularly interested to attend their 2026 Spring Conference,
    as our own Jeroen Baten was giving a talk on "Getting started with
    CI/CD using Forgejo Actions and why this is important AF".


    He assures me he will post it as a show. (cough, owes me a show, cough)


    While there the urge to record came upon me, so I was able to snag
    a few interviews.


    Ronny Lam representing NLUUG


    NLUUG is the association for (professional) developers,
    administrators and users of UNIX/Linux, Open Source, Open Systems
    and Open Standards in the Netherlands. The NLUUG community includes
    system administrators, programmers and network specialists.

    If you are working as an open source professional, then NLUUG is the
    excellent association where you can keep track of your technical
    knowledge, for example during our six-monthly conferences. The aim
    of NLUUG is to disseminate the application and knowledge of open
    standards and UNIX/Linux.

    NLUUG maintains close ties with many organizations and individuals
    who pursue the open mind.


    https://nluug.nl/organisatie/personen/ronny-lam/

    https://nl.wikipedia.org/wiki/NLUUG

    https://nluug.nl/
    Nico Rikken representing the FSFE


    The Free Software Foundation Europe is a charity that empowers
    users to control technology. Software is deeply involved in all
    aspects of our lives. Free Software gives everybody the rights to
    use, understand, adapt, and share software. These rights help
    support other fundamental rights like freedom of speech, freedom
    of press, and privacy.


    While we are no strangers to chatting with the Free Software
    Foundation Europe (hpr857, hpr1957, hpr2223, hpr2945, hpr2946,
    hpr3388, hpr3407, hpr3833), this was the first time we had a chance
    to interview Nico Rikken.


    We chat about freedom and Ada and Zangemann - A Tale of Software,
    Skateboards, and Raspberry Ice Cream by Matthias Kirschner and
    Sandra Brandstätter.

    Geert-Jan Meewisse representing the Coalition for Fair Digital Education


    The Coalition for Fair Digital Education (CEDO) is a group of
    concerned parents, IT professionals, teachers, and privacy
    advocates committed to enabling fair and sovereign digital
    education. The coalition operates as a working group within
    Internet Society Netherlands (ISOC). We have drafted a manifesto
    calling for improvements in digital education.

    Today, children in education receive an online account from a
    foreign Big Tech company at an early age. Through this account,
    data can be collected, profiles can be built, and personal
    information can be used and exploited by these companies. This
    profiling leads to children being categorized and receiving
    tailored content that companies deem relevant—before they even
    discover things for themselves. And that’s not the only issue.
    Since schools exclusively use “standard” Big Tech solutions,
    children do not learn about alternative programs or tools. As a
    result, real digital skills and critical thinking are not
    developed, making children dependent on a company that profits
    from their data. The privacy and sovereignty of digital education
    are under severe pressure, affecting not only students but also
    teachers and parents, who are forced to use the same systems.
    Other countries are already ahead in this regard: in Denmark,
    Google products have been banned in schools in Helsingør
    municipality, and the German state of Baden-Württemberg has
    prohibited Microsoft 365.

    We advocate for the development of an open-source digital
    infrastructure for learning and educational tools, based on public
    values such as autonomy, equality, sovereignty, democracy,
    transparency, accessibility, academic freedom, and
    privacy-by-design.

    To achieve this, raising awareness among students, parents,
    teachers, and school boards is crucial. Additionally, we aim to
    involve policymakers by presenting our manifesto.


    https://eerlijkdigitaalonderwijs.nl/english/



    A working group of the Internet Society, Geert-Jan was here to tell
    us of their work to build a FLOSS alternative for Education.

    You can get in touch with him at gj -at- eerlijkdigitaalonderwijs .nl,
    or @geert-jan:matrix.org

    Conclusion


    I had great conversations with the sponsors, who were a little shy
    about doing an interview. They do have a range of jobs available for
    those of us with Dutch nationality, or who have lived in the
    Netherlands for the last 10 years.


    The event was fantastic, professional, held in a great venue, and
    the closest thing to real-life "xkcd: Shibboleet" as you are likely
    to get.


    I would like to thank the NLUUG team, volunteers, venue staff and
    of course the attendees for a wonderful day. With any luck this
    will not be the last time you hear about this team on HPR.


    The recordings will be available on the NLUUG FTP Server.


    HPR4638: Simple Podcasting - Episode 3 - Analyzing and Filtering

    13.05.2026
    This show has been flagged as Clean by the host.

    01



    This is the third in a four-part series on simple podcasting.







    02



    In this episode we will cover the following topics:



    Analysis of audio noise problems and filtering methods used to deal with specific problems that we may find.



    Command line recording.



    Command line playback.



    Getting information about an audio recording.







    03 Introduction



    When I did my first couple of podcasts I didn't notice that there was a quiet high pitched whine or buzz in the background.



    Nobody complained about it, but I thought I could do better in subsequent episodes.







    04 Creating an Audio Sample



    If you have a similar problem, the first step is to find out where it is coming from.



    If there is no audible noise where you are recording, there is a good chance the problem is in the microphone or another part of the audio system.



    Plug in your microphone and record 2 or 3 seconds of quiet audio where you do not speak into the microphone or make other noise.







    05



    You will need a minimum amount of data in order to analyze it.



    For a flac file sampled at 44.1 kHz, 2 to 3 seconds of data should be enough.



    To get a sample of just electronic noise you can put the microphone in a drawer or somewhere like that if you want to be sure of getting a quiet signal.



    Any sound recorded in this way should be mainly from the microphone or other electronic elements in the analogue pathway.



    To get a sample of possible ambient noise, such as fans, make sure the microphone is in the open air in an area which is representative of where it will be when you are recording.
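    If you prefer the command line for this step, ffmpeg (covered for recording later in this episode) can capture a fixed-length sample. This is a sketch, assuming the same PulseAudio default input used later; the file name is just an example, and the command is printed as a string so you can paste it into a terminal.

    ```shell
    # Sketch: capture a fixed-length quiet sample from the command line.
    # Assumes a PulseAudio default input; noise-sample.flac is a placeholder.
    # The -t option makes ffmpeg stop on its own after 3 seconds.
    cmd='ffmpeg -f pulse -i default -ac 1 -ar 44100 -t 3 noise-sample.flac'
    echo "$cmd"
    ```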







    --------------------







    06 Analyzing using Fourier Transforms



    Next you need to look at the wave form.



    At this point I will describe this using Audacity.



    I will show other ways later, but Audacity is actually the easiest if you are starting from nothing.



    You don't need to become an expert in Audacity to use it, just follow the steps I will describe.



    I myself don't know how to use Audacity beyond using this one feature.







    07



    We are going to analyze the sound spectrum in our sample.



    The technique being used is a Fourier Transform.



    A Fourier transform, in practice usually computed with the "FFT" (fast Fourier transform) algorithm, is a mathematical method of showing a signal in terms of frequency along the x axis instead of time.



    This allows us to spot troublesome noise frequencies which appear when we don't want them to.



    The FFT is a very common mathematical technique which is widely used in signal processing, not just in audio.







    08



    There is software which will create pretty coloured animations of sound waves, but this is not what you want. These are simply decorative patterns and won't tell us what we want to know.
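    A static spectrogram image, by contrast, does show real information. If you have Sox installed (it is used later in this episode for filtering), its spectrogram effect can render one to an image file. A sketch with placeholder file names, printed for copy-paste:

    ```shell
    # Sketch: render a spectrogram image of the sample with Sox.
    # sample.flac and spectrum.png are placeholder names; "-n" tells Sox not
    # to write an audio output file, so only the spectrogram PNG is produced.
    cmd='sox sample.flac -n spectrogram -o spectrum.png'
    echo "$cmd"
    ```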







    --------------------







    09 Using Audacity



    Install Audacity if you haven't already.







    Start Audacity.



    Select File > Import > Audio,



    then navigate to your sample and select "Open".



    The file should load.







    10



    In the wave form part of the window, click anywhere and then type Ctrl-A to select all data points.



    The chart should turn a slightly darker colour.



    From the menu, select Analyze > Plot Spectrum.



    A new window will open, showing magnitude in dB on the y axis, and frequency in hertz on the x axis.



    For "algorithm", be sure it is set to "spectrum".







    11



    There are now two settings that we need to play with while we look for problems.







    One is "size".



    The default for this is 1024.



    The other is "axis".



    The default for this is "log frequency".







    --------------------







    12 What to Look For



    What we are looking for are large obvious spikes that stand out in the data.



    Since our test signal has very little to no actual audio data, any spikes should represent electrical or other noise that doesn't belong there.







    13



    I have found two combinations of settings to be most helpful in finding problems.



    These are



    Size 2048, axis linear frequency.



    Size 32768, axis log frequency.







    14



    A small size value can help very narrow spikes stand out from the background more, while a large size value can help separate spikes from surrounding noise.







    A linear frequency axis can help with seeing all spikes across the full frequency range, while a log frequency axis can help to better see what is happening in the often very crowded lowest frequency range.







    --------------------







    15 A Real Example of an Audio Problem



    If you have good audio equipment you may find nothing obvious. If you cannot hear any noise in the signal, there may be none of any consequence and there is nothing for you to do.







    16



    However, in my case I found two main problems and one lesser one.



    One problem was a spike at 60 Hz, which is the AC line frequency.



    There is also a lesser problem: a broad range of noise below 60 Hz.



    Both of these, however, will be taken care of by the basic filtering that we looked at earlier, so we do not need to worry about them here.







    17



    The other main problem was a large spike at every 1 kHz interval from 1 kHz to 19 kHz.



    This was noise generated within the headset electronics, or the result of noise on the USB power supply.



    This is the product of a cheap headset.







    18



    These spikes are not very large compared to the volume of my voice, but if I do the same sort of analysis of samples where I am speaking, they appear in the intervals between words.



    This results in a high pitched whine or buzz.



    This was the source of the background noise or buzz in my first two podcast episodes.



    I need to get rid of this.







    19



    One option would be to get a better microphone, but, well, that wouldn't be any fun, would it? It would also cost money, and I don't want to spend any of that if I don't have to.







    If you analyze your own signal, you may find a different pattern, or even no noise at all.







    If you did not find anything when shielding your microphone from ambient audio noise, repeat the same test but with the microphone exposed to acoustic noise in the room.







    --------------------







    20 Advanced Filtering







    The next step is to figure out how to get rid of this noise.







    I have called this section "advanced filtering", but we are actually just making use of a technique that was already covered in basic filtering.







    21



    To deal with the remaining spikes we can use additional "band reject" filters, each of which removes a specific frequency at 1 kHz intervals from 1 kHz to 12 kHz.



    We will use this in combination with the filtering that we have already done previously, so we don't need to worry about anything above 12 kHz as we already remove that with a low pass filter.







    After a small amount of experimenting I came up with the following.







    22



    Because I am applying a total of 16 filters, 4 for basic filtering and 12 to deal with the specific microphone problems that I have, I have broken up the filters into separate strings.



    I then generate the 12 new band reject filters from a template.







    Note that I don't show the "de-esser" filter here.



    I would recommend adding it as a separate step after doing the sort of filtering we are talking about here.







    23



    Rather than reading out multiple lines of bash script, I will post them in the show notes.



    I will give a brief description of them here which you can refer to when reading the show notes.



    The FFMPEG and Sox versions are very similar in concept so I don't need to go over the Sox version in detail. See the show notes for it.











    FFMPEG Version



    Here's the FFMPEG version.







    # The high and low pass filters.



    hlpfil="highpass=f=80, lowpass=f=12000"







    # Band reject filters: one for 60 Hz and another for 50 Hz.



    linefil="bandreject=f=60:width_type=h:w=20, bandreject=f=50:width_type=h:w=20"







    # Create a series of band reject filters, from 1 kHz to 12 kHz.



    # Change or remove this part if your recording hardware does not require it.



    ftemplate="bandreject=f=%s000:width_type=h:w=100"



    # The sed strips the trailing comma so ffmpeg accepts the filter string.
    kilospikefil=$( seq 1 12 | xargs printf "$ftemplate," | sed 's/,$//' )







    # Using ffmpeg



    ffmpeg -i input.flac -af "$hlpfil, $linefil, $kilospikefil" output.flac







    24



    There are a total of 5 lines of bash script.







    In the first line, we create a string called "hlpfil" which is just the high and low pass filters copied from our previous discussion on basic filtering.







    In the second line, we create a string called "linefil" which is just the simple bandreject filters covering 50 and 60 hertz AC line noise, also from basic filtering.







    25



    In the third and fourth lines, we create a string called "kilospikefil" containing the new filters.



    The "f" parameter represents the frequency we are targeting.



    The "w" parameter represents the "width" of the frequency range we are filtering in terms of hertz.



    The filter is applied gradually rather than with a sharp cut-off, so to get more filtering action we need to have larger width. In this case I decided to hammer the spike quite aggressively and so used a relatively wide width of 100 hertz. Testing with a voice file did not show any noticeable distortion, so it's an acceptable solution.







    26



    For this filter we need to create a dozen filter commands, so we use the shell "seq" command to generate a sequence of numbers from 1 to 12.



    We then pipe that into the xargs command which applies each number to the next command.



    The next command is "printf", which takes the number it gets from xargs and applies it to the "ftemplate" string template in a manner very similar to C programming printf string templates.







    27



    We also have a comma in there to separate each of the individual filters.



    We then surround this with a $ and () so we can run the command and capture the output into a variable.



    Then we call ffmpeg and pass it the filters we created by putting the variable names inside a double quoted string, separated by commas.



    All of this will be in the show notes, so don't worry about trying to get the exact details right now.
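    As a minimal illustration, here is the same template expansion shortened to three filters so the output stays readable; the sed at the end strips the trailing comma that the template leaves behind:

    ```shell
    # Generate three band reject filter strings from the template, then strip
    # the trailing comma so the result drops cleanly into an ffmpeg filter chain.
    seq 1 3 | xargs printf "bandreject=f=%s000:width_type=h:w=100," | sed 's/,$//'
    ```

    This prints three comma-separated bandreject entries, for 1000, 2000 and 3000 Hz.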











    Sox Version



    Here's the Sox version.







    # The high and low pass filters.



    sxhlpfil="highpass 80 lowpass 12000"







    # Band reject filters: one for 60 Hz and another for 50 Hz.



    sxlinefil="bandreject 60 20 bandreject 50 20"




    # Create a series of band reject filters, from 1 kHz to 12 kHz.



    sxftemplate="bandreject %s000 100"



    sxkilospikefil=$( seq 1 12 | xargs printf "$sxftemplate " )




    # Using SOX.



    sox input.flac output.flac $sxhlpfil $sxlinefil $sxkilospikefil







    28



    The Sox version is very similar with the exception that the command arguments representing the filters must not be in quoted strings as Sox wants to see them as separate arguments instead of parsing a string.











    --------------------







    29 Confirming the Effect







    If we apply the above filters and look at this headset noise output file in the Audacity spectrum analyzer we will now see that these noise spikes are almost completely gone.







    We can now confirm how well this works by using a test audio file. Any normal short voice audio file will do for this. Just talk into the microphone normally and create a voice sample file that is 5 or 10 seconds long, or whatever you feel comfortable with.







    30



    With the original unfiltered voice audio I can hear a distinct high pitched whine overlaying the voice.



    With the filtered audio that whine or hum is not detectable.







    If we then look at the voice file in the Audacity spectrum analyzer, we can see distinct "notches" at the 50 Hz and 60 Hz frequencies, and at every 1 kHz from 1 kHz to 12 kHz.







    These notches are narrow enough that they won't cause a noticeable problem with voice signals.



    If we apply this filter to voice samples, the buzz or whine is gone and the voice signal sounds fine.







    Despite using a very cheap microphone, I now have acceptable quality audio for a podcast.







    31



    Again I want to emphasize that in this instance I am dealing with deficiencies in my hardware instead of buying a better microphone.



    These additional filters are intended to deal with the specific hardware problem I am facing.



    You don't need these additional filters if you cannot detect an audible problem.



    On the other hand, if you have a different problem you may wish to deal with a different set of frequencies.



    Finding these problems is the reason for using a spectrum analyzer.







    32



    FFMPEG has other filtering methods as well.



    However, as I didn't end up using them I can't really do an adequate job of describing them.



    If anyone has used them successfully, they are welcome to make a podcast on the subject.







    --------------------







    33 Completing the Process



    With these new filters added into the middle of the processing steps, you can now complete the processing by doing the de-essing, normalizing, and review steps as described in the previous episode.







    --------------------







    34 Command Line Recording



    I will now cover a separate topic, which is recording using command line programs.



    I am covering it in this episode as it is a short topic and it is convenient to talk about it here.











    35



    As well as using GUI based recording programs such as Gnome Sound Recorder, it is possible to record podcast episodes using command line tools such as FFMPEG.







    As for why you may wish to use command line tools to record audio, there are several reasons.



    One is that you may simply prefer to do it this way because it pleases you to do so.



    Another is that it allows the recording step to be included in a script that encompasses other parts of the process, automating what may have otherwise been separate manual steps.







    36



    However, if you don't find these arguments particularly compelling, then I'm not going to attempt to persuade you to use the command line to record audio. I am doing this part of this episode out of a desire to have a bit of fun and I probably won't be using it much myself.







    I will however use one of these methods to record this part of this episode.







    37 Recording with FFMPEG - The Basics



    One of the common command line tools you can use is FFMPEG, a package which I have previously mentioned with respect to filtering audio files.







    Here is an example of how to record using FFMPEG. We call FFMPEG specifying the audio input system as the FFMPEG input, and then specify a file to output to.







    38



    # Record audio.



    ffmpeg -f pulse -i default ff.flac







    39



    Press 'q' to stop.







    This uses pulse audio on Linux for input "-f pulse",



    and the default input "-i default".







    However, this does not specify the sample rate or mono recording. To do that we need to add a few more parameters, as in the following:







    40



    ffmpeg -f pulse -i default -ac 1 -ar 44100 ff.flac







    41



    "-ac 1" specifies mono output.



    "-ar 44100" specifies a 44.1 kHz sample rate.











    42 Playback with FFMPEG - The Basics







    FFMPEG can also play back audio. In this case however we need to call the "ffplay" program rather than FFMPEG itself.







    To play an audio file, simply call ffplay and give it the name of the audio file as an argument to the command.







    For example:







    43



    # Play an audio file.



    ffplay podcast.flac







    44



    We can also call it with the "autoexit" option, which tells ffplay to automatically exit when the audio file has finished playing.







    ffplay -autoexit ff.flac







    45



    "-autoexit" means exit when the audio file is done playing.







    46



    To exit in the middle of playback, press "q" or ESC.



    To pause the playback, press "p" or space bar.



    To decrease the volume press "9" or "/".



    To increase the volume press "0" or "*".







    47



    To seek forward 10 seconds, press the right cursor button.



    To seek backward 10 seconds, press the left cursor button.



    To seek forward 1 minute, press the up cursor button.



    To seek backward 1 minute, press the down cursor button.







    48



    The "0" and "9" keys mentioned above are those on the top row of the keyboard, not the ones on the separate numeric pad.







    49



    While the recording is playing, a graphical window will open which shows a cascading waveform based on the current content. This is purely decorative and does not serve any particularly useful purpose.
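    If you would rather not have that window at all, ffplay has a -nodisp option that suppresses it. A sketch, printed for copy-paste:

    ```shell
    # Sketch: play without opening the decorative waveform window.
    cmd='ffplay -nodisp -autoexit ff.flac'
    echo "$cmd"
    ```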







    --------------------







    #!/bin/bash







    # Record a podcast episode segment.







    # Get the next file name.



    # First we check if any matching file patterns exist. If they don't,



    # then we create the first one starting counting at 1.



    fcount=$( ls [0-9][0-9].flac 2>/dev/null | wc -l )



    if (( $fcount < 1 )); then



    fname="01.flac"



    else



    # If there are any matching file patterns, we find the highest number



    # and increment it by 1.



    filenum=$( ls [0-9][0-9].flac 2>/dev/null | cut -d. -f1 | sort | tail -1 )



    newfilecount=$(( 10#$filenum + 1 ))



    fname=$( printf "%02d.flac" $newfilecount )



    fi







    echo "Recording to: $fname"







    # Record using ffmpeg.



    # This makes use of pulse audio and the input is the default audio input.



    # The sample rate is set to 44.1 kHz, and it is recorded as mono (1 channel).



    ffmpeg -f pulse -i default -ar 44100 -ac 1 $fname







    echo "Recorded audio to: $fname"







    # Report on basic information about the audio file that was just recorded.



    ffprobe -hide_banner $fname
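The zero-padded numbering in the script above relies on printf formatting and bash's "10#" base-10 prefix; here is a minimal standalone sketch of just that logic (the next_fname helper name is mine, for illustration):

```shell
# Compute the next two-digit flac filename from the highest existing number.
# "10#" forces base-10 arithmetic so "08" and "09" are not parsed as octal.
next_fname () {
    printf "%02d.flac\n" $(( 10#$1 + 1 ))
}

next_fname 09   # prints 10.flac
next_fname 1    # prints 02.flac
```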











    --------------------







    50 Sox - Not so Good



    I did not find the recording or playback features of Sox to be as useful as those of FFMPEG, so I won't bother to cover them here.







    --------------------







    51 Getting Information About an Audio Recording







    There are also command line tools which can be used to retrieve information about audio recordings.







    52 FFMPEG Version







    With FFMPEG this is called "ffprobe". For example:







    53



    ffprobe hpr4566.mp3







    54



    This will print out a lot of information about FFMPEG itself. To skip that, use the "-hide_banner" option.







    55



    ffprobe -hide_banner hpr4566.mp3







    56



    This will print out information about the audio recording, including things like the duration, bit rate, sample rate, stereo or mono, etc.







    If the author added metadata tags to the file, it will also show those. HPR adds things like the title, author, copyright license, comment, etc. You can extract the ones you want using something like grep and cut.
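Since ffprobe writes its report to standard error, you can redirect that and filter it with grep and cut. This sketch runs the same pipeline on a simulated report line (the exact layout is an assumption, so check your own ffprobe output):

```shell
# One metadata line as ffprobe typically prints it (simulated here).
sample="    title           : HPR4566: Some Episode Title"

# Real use would be: ffprobe -hide_banner file.mp3 2>&1 | grep -i 'title' | ...
printf '%s\n' "$sample" | grep -i 'title' | cut -d: -f2- | sed 's/^ *//'
# prints: HPR4566: Some Episode Title
```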







    57 Sox Version







    Sox has a similar feature, called "soxi".







    58



    soxi ff.flac







    59



    However, it may not work on mp3 files if you do not have an mp3 handler for it installed.











    --------------------











    60 Conclusion



    In this episode we took a brief look at an example of how to solve an audio problem through filtering.



    We looked at how to use Audacity to find where the problems were.



    We then looked at how to apply filters to remove these sources of noise.



    We also looked at how to record podcasts and get information about audio files using command line tools.







    61



    In the next episode we will look at alternatives to Audacity for analyzing audio. While Audacity works just fine, this is an opportunity to have a bit of fun with some gratuitous hackery.







    62



    This has been the third episode in a four-part series on simple podcasting.







    --------------------



    --------------------







    Full Audio Processing Pipeline



    This version includes the special filters used to fix my headset problems.



    Use the version from the previous episode if you do not have the same



    audio hardware problems.











    #!/bin/bash







    # Full processing pipeline for making simple podcasts.







    # ======================================================================



    # Concatenate multiple flac files into a single flac file.



    # This is used to combine podcast recorded segments into a single



    # flac file for uploading to HPR.







    concataudio ()



    {



    outputname="$1"







    # First create the list file.



    printf "file '%s'\n" [0-9][0-9].flac > podseglist.txt







    # Now concatenate them



    ffmpeg -f concat -safe 0 -i podseglist.txt "$outputname"







    rm podseglist.txt



    }
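The list file that concataudio builds for ffmpeg's concat demuxer has one "file" line per segment; printf repeats its format string for every name the glob expands to. With literal names for illustration:

```shell
# Emit one "file '...'" line per segment, in order.
printf "file '%s'\n" 01.flac 02.flac 03.flac
```

This prints:

    file '01.flac'
    file '02.flac'
    file '03.flac'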







    # ======================================================================







    # Basic and advanced filters.



    filter ()



    {



    inputfile=$1



    outputname=$2







    # Using ffmpeg.







    # The high and low pass filters.



    hlpfil="highpass=f=80, lowpass=f=12000"







    # Band reject filters, one for 60 Hz and another for 50 Hz.



    linefil="bandreject=f=60:width_type=h:w=20, bandreject=f=50:width_type=h:w=20"







    # Create a series of band reject filters, from 1 kHz to 11 kHz.



    ftemplate="bandreject=f=%s000:width_type=h:w=100"



    kilospikefil=$( seq 1 11 | xargs printf "$ftemplate," )
    kilospikefil=${kilospikefil%,}   # strip the trailing comma left by printf











    # Using ffmpeg



    ffmpeg -i $inputfile -af "$hlpfil, $linefil, $kilospikefil" $outputname



    }
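The band-reject chain built above with seq and xargs can be previewed on its own; printf reuses its format string for each number xargs passes along. Shown here with 1-3 instead of 1-11 to keep the output short:

```shell
# Build a comma-joined chain of band-reject filters at 1, 2, and 3 kHz.
ftemplate="bandreject=f=%s000:width_type=h:w=100"
seq 1 3 | xargs printf "$ftemplate,"
```

Note that the result ends with a trailing comma, which is worth trimming before handing the string to ffmpeg's "-af" option.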







    # ======================================================================







    # De-Essing.



    deessing ()



    {



    inputfile=$1



    outputname=$2



    option=$3







    # De-essing filter.



    ffmpeg -i $inputfile -filter_complex "deesser=i=0.5:m=0.5:f=0.5:s=$option" -b:a 336k -sample_fmt s16 $outputname







    }







    # ======================================================================



    # Normalizing the audio to EBU R128 standard for review using ffmpeg.



    normffmpeg ()



    {



    inputfile=$1



    outputname=$2







    # Normalize to EBU R128 standard.



    ffmpeg -i $inputfile -af loudnorm=I=-17:TP=-2.0:LRA=4.0 -ar 44.1k $outputname







    }







    # ======================================================================







    # Output an MP3 version to help with reviewing.



    mp3convert ()



    {



    inputfile=$1







    # Get the name of the file and then create the output file name.



    j=$( basename $inputfile ".flac" )



    outputname="$j"".mp3"







    # Convert to MP3.



    ffmpeg -i $inputfile $outputname



    }
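The extension swap in mp3convert hinges on basename's optional second argument, which strips a trailing suffix; a standalone sketch:

```shell
# basename removes the directory part and, if given, the trailing suffix.
j=$( basename /tmp/fullpod-norm.flac ".flac" )
echo "$j.mp3"   # prints fullpod-norm.mp3
```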







    # ======================================================================







    # Concatenate the separate audio files.



    concataudio fullpod-unfiltered.flac







    # Basic filtering.



    filter fullpod-unfiltered.flac filtered.flac







    # De-essing. This is the version to send for publishing.



    # The third argument should be "o" for de-essing, or "i" for pass through without de-essing.



    deessing filtered.flac fullpod.flac o







    # Normalized for review.



    normffmpeg fullpod.flac fullpod-norm.flac







    # Output an MP3 copy for review.



    mp3convert fullpod-norm.flac











    --------------------



    --------------------





    Provide feedback on this episode.

    HPR4637: UNIX Curio #6 - at and batch

    12.05.2026
    This show has been flagged as Clean by the host.

    This series is dedicated to exploring little-known—and occasionally useful—trinkets lurking in the dusty corners of UNIX-like operating systems.


    I would imagine that most users of UNIX-like systems have heard of cron—certainly any system administrator should have. Briefly, cron is a way of running a job repeatedly based on the time and date; for example, a job could run every hour, at 5:00am every Tuesday, or the 3rd of every month. It is commonly used for administrative or maintenance tasks that should be done on a regular schedule, such as checking for software updates, rotating log files, or updating the database for the locate command.



    As well-known as cron is, there is a similar utility that very few seem to be aware of: at. This is the word "at", and has nothing to do with the at symbol "@". An at job is very much like a cron job, except that an at job only runs one time. A job is submitted by running "at timespec" [1], where timespec is the time and date the job is to be run. The linked POSIX specification page describes acceptable formats for timespec; some examples are "now", "14:00", "noon tomorrow", "14:00 + 3 months", and "14:00 January 19, 2038". The utility then waits on standard input for you to enter a set of commands to be run in the job. You end input by typing Control-D to mark the end of text. (As an alternative to typing in the job, you could instead use the "<" symbol to redirect standard input to come from a file containing the commands you want to run.)
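As a concrete sketch, a job's commands can be piped in on standard input instead of typed interactively. This assumes the at package is installed with its daemon running; the guard (and the "|| true") only keeps the snippet from erroring where it is not:

```shell
# Schedule a one-off job for noon tomorrow: append a reminder line to a file.
# 'at' reads the job's commands from standard input.
if command -v at >/dev/null 2>&1; then
    echo 'echo "Renew the domain name" >> "$HOME/reminders.txt"' | at noon tomorrow || true
else
    echo "at is not installed on this system"
fi
```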



    When the specified time arrives, the job will be run. That is the theory, anyway, but some things may interfere. The normal configuration for some implementations only checks for due at jobs every five minutes, so there can be a delay before a job is actually run. Also, if the system isn't running, obviously it can't execute any jobs. When it comes back up, typically it will check for any pending at jobs that are currently or past due and run those. It is best to think about an at job being run no earlier than the time it was scheduled for, and probably soon after, provided the system is up. The POSIX standard doesn't specify anything about when jobs are actually run, just that they are scheduled for a particular date and time.



    The user does not need to be logged in for a job to run—if the job outputs anything to standard output or standard error, that text will be e-mailed to the user, presuming the system is set up to send mail. This is often true for a server, which might be running a Mail Transfer Agent like sendmail, postfix, or exim, but many desktops are not. If nothing is output to standard output or standard error, or if that output is redirected to a file, then mail will not be sent on job completion. This behavior can be changed with the -m option; in that case, mail will always be sent when the job finishes whether or not there is any output.



    The batch command is very similar [2]—POSIX specifies it as being equivalent to "at now" with two differences. First, jobs are put into a different queue, and second, mail is always sent when a job completes as if the -m option was used with at. In practice, however, certain aspects of the behavior of batch depend on the implementation.



    On the large majority of systems I investigated [3,4,5,6,7,8], but not all [9], batch jobs will only be run when the system load level drops below a certain point. This can typically be configured by the administrator but has a default value—the manual pages for a couple of systems don't actually list a default value and just say batch jobs will run "when system load levels permit". Basing execution on the load level makes sense if the batch utility is seen as a way of running potentially resource-intensive jobs when the system is not being heavily used. However, this behavior is not required by POSIX.



    Another question that the standard leaves unanswered is how queues behave. From the normal understanding of the word "queue", you might expect that each successive job is run one at a time once the previous job completes. However, this is not stated in POSIX, and some implementations explicitly allow a configurable number of jobs to run simultaneously. Manual pages for other systems simply don't mention the subject. (I researched this episode by looking at documentation for a number of BSD, Linux, and commercial UNIX systems, but didn't actually test out how they behave.) POSIX only requires systems to have two queues, one named "a" for at jobs and one named "b" for batch jobs, but allows implementations to have more. It says nothing about how different queues compete for resources—one implementation assigns a higher nice value to jobs in a queue whose name comes later in the alphabet, giving them a lower priority in the process scheduler.



    So what good are at and batch? While I think they certainly meet the "obscure" requirement for a UNIX Curio, I have to admit they aren't particularly useful today. They were designed for an era where a typical UNIX-like system would run around the clock and had multiple users who might log in at various times of the day but weren't connected 24/7. In that context, using batch to run a job when the system is lightly loaded might be useful; nowadays, you can just run it whenever you like on your own machine. I have never actually used batch myself. On a machine where there is serious competition for resources among users, batch is probably not a sophisticated enough tool to manage their jobs—the NetBSD and Debian manual pages explicitly suggest using something different [3,6]. Supercomputing environments have even more complex requirements and a number of specialized solutions exist for scheduling jobs there.



    I have used at a couple of times. One example was for an organization I was part of that had paid for its domain name registration several years into the future. On the organization's server, I set an at job to e-mail the administrator a reminder to renew it a few months before the domain was due to expire. It was useful in that case because I didn't know whether I would even continue to be involved then, so a personal reminder for myself wouldn't necessarily help. But in my experience, administrative tasks don't tend to be one-off events. Instead, they repeat, making cron the right tool to use. For reminders, a calendar app is probably a better solution in most cases.



    While you might never have a use for at and batch, I still think it's good to know that they exist. Just be aware that you'll probably need to read the manual page on your system to fully understand how they will behave.



    References:

    1. at specification: https://pubs.opengroup.org/onlinepubs/009695399/utilities/at.html
    2. batch specification: https://pubs.opengroup.org/onlinepubs/009695399/utilities/batch.html
    3. NetBSD 10.0 at manual page: https://man.netbsd.org/NetBSD-10.0/at.1
    4. FreeBSD 15.0 at manual page: https://man.freebsd.org/cgi/man.cgi?query=at&sektion=1&manpath=FreeBSD+15.0-RELEASE+and+Ports
    5. OpenBSD 7.8 at manual page: https://man.openbsd.org/OpenBSD-7.8/at.1
    6. Debian 13 at manual page: https://manpages.debian.org/trixie/at/at.1.en.html
    7. openSUSE 42.3 at manual page: https://man.freebsd.org/cgi/man.cgi?query=at&sektion=1&manpath=openSUSE+42.3
    8. HP-UX Reference (11i v3 07/02) - 1 User Commands A-M (vol 1): https://support.hpe.com/hpesc/public/docDisplay?docId=c01922490&docLocale=en_US
    9. OpenSolaris 2010.03 at manual page: https://man.freebsd.org/cgi/man.cgi?query=at&sektion=1&manpath=OpenSolaris+2010.03







    Apologies for the "tapping" sound that occurs in parts of this episode. I think my microphone must have picked up some electromagnetic interference.



    HPR4636: 7 seconds memory

    11.05.2026
    This show has been flagged as Explicit by the host.

    There are two themes of the human experience that greatly influence our feelings and our behaviours: memory, and pain.



    Today we are going to talk about the first.



    Clive Wearing was a conductor, a musician, who lost a part of his brain. A virus, herpes simplex, which usually causes cold sores and fever, in his case crossed the blood-brain barrier and caused an inflammation that permanently damaged the hippocampus, a region responsible for forming memories.



    After being cured of the infection with antiviral medicine, he was immediately left a man with no memory. He could not recognize his children, who later admitted that they had in a way abandoned their father, ceasing their visits to him, because his condition was too sad for them.



    At first, Clive was very angry; "I can't think" was a constant refrain. "Prisoner of Consciousness" is the title of the TV documentary produced about him soon after the event.



    His wife (his second, not the mother of his two sons and one daughter) was his fullest "item" of memory, if we could picture memory as drawings in a piece of furniture, which is of course inexact, to say the least. He always still knew that she, Deborah, was his wife; and apart from his own, Deborah's name was the only one he still knew.







    His anger was gradually replaced by a calm, gentle, good-humored personality. Apart from the loss of memory, he kept perhaps two thirds of his personality: he definitely was still Clive. (That observation is from one of his sons, in the documentary made 20 years after the first, called "7 seconds memory".) That is why Deborah, after divorcing him and failing to find another love (she was searching for Clive in other experiences, and could not find him), later renewed her wedding vows with him, even though they could not live together because of his need for constant supervision.



    The doctors, as the second documentary (which is the basis for this program) says, could not explain how he became more peaceful. I have a guess. Clive lost his memory of events; he could not relive any happenings in his mind. He knew his wife was his wife, but had no memory of the wedding; he remembered having worked for the BBC, but not one thing, not one activity, that he had done or participated in. Maybe he retained a little bit of what we could call (and I lack any technical precision here) descriptive memory. He could retain the old relations of a name with a characteristic, or of a face with the level of closeness he had with that person, as long as these relations were verbalized in his understanding. Because he could not evoke any fact, he lost what we might call (another term lacking precision) narrative memory, but words still made sense to him. So, living the same day every day, with no time and no continuity, maybe some perception could have been engraved in his mind, consciously or unconsciously, even with the damaged memory, in the direction of going on (letting go) without despair. This is only a guess.



    Thank you.





About Hacker Public Radio
Hacker Public Radio is a podcast that releases shows every weekday, Monday through Friday. Our shows are produced by the community (you) and can be on any topic that is of interest to hackers and hobbyists.