
Variables and Expansions

How do I store and work with data?

Bash parameters and variables; environment variables, special parameters and array parameters; expanding parameters, expansion operators, command substitution and process substitution; pathname expansion, tilde expansion and brace expansion.


What is expansion?

We now know how to use bash for writing and managing simple commands. These commands give us access to many powerful operations on our system. We've learned how commands tell bash to execute programs by creating new processes for them. We understand command arguments and how to pass information to our commands in order to make them do the things we need done.

And with all of this knowledge, we begin to taste the power of what can be done with the shell. It's almost like we're communicating directly with our system in a brand new language where commands are tasks and arguments are the specific instructions on how those tasks should be performed.

One of the main limitations at this point is that passing information to commands explicitly as arguments is very restrictive. Having to spell out every single file name to operate on, and every bit of data that should be shown on the screen or handled by a program, means we can only write commands when we have a perfect picture of exactly what needs to be done. What we need is a way of making our commands more dynamic, turning them into a sort of template for how to perform actions that we can re-use time and time again.

Say we want to delete the files in our Downloads directory. With the knowledge we've gathered so far, we can look at what files are there and delete them:

$ cd ~/Downloads
$ ls
05 Between Angels and Insects.ogg
07 Wake Up.ogg
$ rm -v '05 Between Angels and Insects.ogg' '07 Wake Up.ogg'
removed '05 Between Angels and Insects.ogg'
removed '07 Wake Up.ogg'
$ ls

Brilliant. Our ls command gives no more output, which means our directory is empty.
But wouldn't it be nice if we didn't have to be quite so explicit about everything? After all, our intention was to empty our Downloads directory. And to do so, we had to manually go there, find out which files were present, and issue an rm command enumerating each filename. Let's improve on this workflow and make our code a little more dynamic. What we want to achieve is a sort of template for doing a job that we can use again and again. A job description that describes a way of executing our intent regardless of the specific situation we are currently in.

To do this, we need to remove all of the situational specifics from our code. In the example above, the situational specifics are the exact names of the files which we are trying to delete. Emptying our downloads won't mean deleting these two files every time we do it. What we want to do, is to delete every file in the Downloads directory, regardless of what their actual names are. The example above only worked because we had an intermediary step where we, a human, had to look at the names of the files in the directory and re-write them as arguments into an rm command. How can we automate this process?

Pathname Expansion

The answer comes in the first of many forms of expansion bash provides us with. Welcome to Pathname Expansion:

$ cd ~/Downloads
$ rm -v *
removed '05 Between Angels and Insects.ogg'
removed '07 Wake Up.ogg'
$ ls

What happened to the filenames we wanted to delete? We've replaced them with a pattern that tells bash to expand the pathnames for us. Expansion is the practice of replacing a part of our command code with a situationally specific piece of code. In this case, we want to replace * with the pathname of every single file in our downloads directory. Replacing patterns with pathnames is therefore known as pathname expansion.

In our example above, bash notices that you have put a pathname pattern on the command line in the place where it would expect to see arguments. It then takes this pathname pattern and goes looking on the file system for every pathname that matches it. It so happens that the pattern * matches the name of every single file in the current directory. As a result, bash replaces the pattern in our command line with the pathname of every single file in the current directory. We don't have to do the work ourselves anymore! Once bash replaces our * with '05 Between Angels and Insects.ogg' '07 Wake Up.ogg', it proceeds to invoke the rm command with the full set of arguments: -v '05 Between Angels and Insects.ogg' '07 Wake Up.ogg'. As a result, our downloads directory is emptied as intended. Brilliant.

Bash can perform all sorts of pathname expansions for us. To perform a pathname expansion, we simply write a glob pattern in the place where we want pathnames to be expanded. A glob is the type of pattern supported by the bash shell. These are the basic glob patterns:

Glob Meaning
* An asterisk matches any kind of text, even no text at all.
? A question mark matches any one single character.
[characters] A set of characters within square brackets matches a single character, but only if it is in the given set.
[[:classname:]] When the brackets contain a class name between colons, the pattern matches a single character only if it belongs to that class, instead of having to enumerate each character yourself.
Bash knows about various kinds of character classes. For example, if you use the [[:alnum:]] pattern, bash will match it against a character only if it is alphanumeric. Supported character classes include:
alnum, alpha, ascii, blank, cntrl, digit, graph, lower, print, punct, space, upper, word, xdigit
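
As a quick sketch of a character class at work (using a throwaway scratch directory created with mktemp, so no real files are involved):

```shell
cd "$(mktemp -d)"              # work in a scratch directory, so no real files are touched
touch 01.ogg 02.ogg notes.txt  # create some sample files
printf '%s\n' [[:digit:]]*     # only names that start with a digit match: prints 01.ogg and 02.ogg
```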

We can combine these glob patterns together to describe all sorts of pathname combinations. We can also combine it with literal characters to tell bash that part of the pattern should include exact text:

$ ls                           # Without arguments, ls simply lists the full contents of a directory.
05 Between Angels and Insects.ogg
07 Wake Up.ogg
myscript.txt
$ ls *                         # While the effect is the same, this command actually enumerates every single file in the directory into ls's arguments!
05 Between Angels and Insects.ogg
07 Wake Up.ogg
myscript.txt
$ ls *.txt                     # When we include the literal string .txt, the only pathnames that still match are those that start with any kind of text and end with the literal string .txt.
myscript.txt
$ ls 0?' '*.ogg                # Here we're combining patterns, looking for any pathname that starts with a 0, followed by any single character, followed by a literal space, ending in .ogg.
05 Between Angels and Insects.ogg
07 Wake Up.ogg
$ ls [0-9]*                    # In a character set, we can use - to indicate a range of characters. This matches a pathname starting with one character between 0 and 9, followed by any other text.
05 Between Angels and Insects.ogg
07 Wake Up.ogg
$ ls [[:digit:]][[:digit:]]*   # Character classes are really nice because they speak for us: they tell us exactly what our intent is. We want any pathname that starts with two digits.
05 Between Angels and Insects.ogg
07 Wake Up.ogg
$ ls [[:digit:]][[:digit:]]    # Your pattern needs to match the whole name!  None of our filenames is just two digits.

It is also important to understand that these globs will never jump into subdirectories. They only match against file names in their own directory. If we want a glob to go looking at the pathnames in a different directory, we need to explicitly tell it with a literal pathname:

$ ls ~/Downloads/*.txt         # Enumerate all pathnames in ~/Downloads that end in .txt.
$ ls ~/*/hello.txt             # Globs can even search through many directories!  Here bash searches every directory in our home directory for a file called hello.txt.
/Users/lhunath/Documents/hello.txt

Pathname expansion is an incredibly powerful tool to avoid having to specify exact pathnames in our arguments, or to go looking through our file system for the files we need.

Finally, bash also has built-in support for more advanced glob patterns, called extended globs. By default, support for them is disabled, but we can easily enable it in our current shell with the command:

$ shopt -s extglob

Once extended globs are enabled, the above table of glob pattern operators is extended with the following additional operators:

Extended Glob Meaning
+(pattern[ | pattern ... ]) Matches when any of the patterns in the list appears, once or many times over. Reads: at least one of ....
*(pattern[ | pattern ... ]) Matches when any of the patterns in the list appears, once, not at all, or many times over. Reads: however many of ....
?(pattern[ | pattern ... ]) Matches when any of the patterns in the list appears, once or not at all. Reads: maybe one of ....
@(pattern[ | pattern ... ]) Matches when any of the patterns in the list appears just once. Reads: one of ....
!(pattern[ | pattern ... ]) Matches only when none of the patterns in the list appear. Reads: none of ....

These operators are at first a little more confusing to understand, but they are a great way of adding logic to patterns:

$ ls +([[:digit:]])' '*.ogg    # Filenames that start with one or more digits, then a space, and end in .ogg.
05 Between Angels and Insects.ogg
07 Wake Up.ogg
$ ls *.jp?(e)g                 # Filenames that end either in .jpg or .jpeg.
$ ls *.@(jpg|jpeg)             # Same thing, perhaps written more clearly!
$ ls !(my*).txt                # All the .txt files that do not begin with my.
$ ls !(my)*.txt                # Can you guess why this one matches myscript.txt?
myscript.txt

Extended glob patterns can be extremely useful at times, but they can also be confusing and misleading. Let's focus on the last example: why does !(my)*.txt expand the pathname myscript.txt? Isn't !(my) supposed to only match when the pathname does not have a my in this position? You are correct, it is! And yet, bash expands a pathname that begins with my!

The answer here is that bash will happily match the !(my) part against just the m at the beginning (which is not the same as my), or even against the empty string at the start of the filename. For the pathname to remain eligible for expansion, the rest of our pattern then only needs to match the remainder of the pathname. And it so happens that we have a * glob right after the !(my) glob, which will happily match the rest of the filename. In this situation, the !(my) part matches the m character at the beginning of the name, the * matches the yscript part, and the .txt suffix of the pattern matches the trailing .txt of our pathname. The pattern matches the name, so the name is expanded! When we move the * inside the !() pattern, this no longer works and the match against this pathname fails:

$ ls !(my)*.txt
myscript.txt
$ ls !(my*).txt

Tilde Expansion

There is a different kind of expansion which we have been silently using in this guide without explicitly explaining it. It's called the Tilde Expansion and it involves replacing a tilde (~) in a pathname with the path to the current user's home directory:

$ echo 'I live in: ' ~         # Note that expansions must not be quoted, or they become literal!
I live in:  /Users/lhunath

Tilde expansion is slightly special, compared to pathname expansion, in that it happens very early in bash's parsing of the command. This is merely a detail, but it is important to note that tilde expansion is not pathname expansion: we are not performing a search and trying to match filenames against glob patterns. We are simply replacing a tilde with an explicit pathname.
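
A tiny sketch of the quoting rule mentioned above:

```shell
echo ~          # unquoted: bash replaces the tilde with the current user's home directory
echo '~' "~"    # quoted tildes are literal and print exactly as written: ~ ~
```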

In addition to simple tildes, we can also expand the home directory of another user by putting the user's name right after the tilde:

$ echo 'My boss lives in: ' ~root
My boss lives in: /var/root

Command Substitution

We now have a pretty good idea of what expansion means: we replace a syntactical token in our command by the situationally-specific equivalent value of that token. Thus far, we've only expanded pathnames, either as the result of a pathname expansion pattern or a tilde expansion operation.

But expansion can be used for so much more. We can use expansion to expand almost any kind of data into our command's arguments. Command Substitution is an extremely popular method of expanding data into command arguments. With Command Substitution, we effectively write a command within a command, and we ask bash to expand the inner command into its output and use that output as argument data for the main command:

$ echo 'Hello world.' > hello.txt
$ cat hello.txt
Hello world.
$ echo "The file <hello.txt> contains: $(cat hello.txt)"
The file <hello.txt> contains: Hello world.

What have we done here?
We start out pretty simple: we create a file called hello.txt and put the string Hello world. into it. We then use the cat command to output the contents of the file. We can see the file contains the string we saved into it.

But then things get interesting: what we want to do here is output a message to the user that explains in a nice sentence what the string in our file is. To do this, we want to make the file's contents a "part of" the sentence that we echo out. However, while we write the code for this sentence, there is no telling what the contents of the file is, so how can we type out the correct sentence in our script? The answer is expansion: We know how to get the contents of a file using cat, so here we expand the output of the cat command into our echo sentence. Bash will first run cat hello.txt, take the output of this command (which is our string Hello world.) and then expand our Command Substitution syntax (the $(cat ...) part) into that output. Only after this expansion, bash will try to run the echo command. And can you guess what the argument to the echo command has become after our Command Substitution has expanded in-place? The answer is:
echo "The file <hello.txt> contains: Hello world."

This is the very first kind of value expansion that we've learned about. Value expansions allow us to expand data into command arguments. They are extremely useful and you will use them all the time. Bash has a fairly consistent syntax with regards to value expansions: they all start with a $ symbol.

Command Substitution essentially expands the value of a command that was executed in a subshell. As such, the syntax is a combination of the value-expansion prefix $ followed by the subshell to expand: (...). A subshell is essentially a small new bash process that is used to run a command while the main bash shell waits for the result. We'll learn more about subshells in a future chapter. Suffice it to say that the syntax for expansions in bash is very consistent and very deliberate, which certainly helps with learning it!
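
One detail worth knowing: command substitution strips all trailing newlines from the command's output before expanding it. A quick sketch:

```shell
contents=$(printf 'Hello world.\n\n\n')   # the output ends in three newlines
echo "[$contents]"                        # prints [Hello world.]: the trailing newlines are stripped
```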

As a closing note, I will make brief mention of the deprecated `...` syntax. Old-style Bourne shells used this syntax for Command Substitution instead of the more modern $(...) syntax. Bash and all modern POSIX shells support both, but it is highly recommended that you stop using the backtick (`) syntax and convert it to the value-expansion equivalent whenever you see it in the wild. Although they are functionally similar, the backtick variant has some very important downsides: backticks are easily mistaken for single quotes, they cannot be nested without awkward escaping, and backslashes inside them are treated inconsistently.
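
One such downside is nesting. A quick sketch of the difference:

```shell
modern=$(echo "one $(echo two)")    # $(...) nests cleanly, no escaping needed
legacy=`echo "one \`echo two\`"`    # backticks must be escaped to nest, which quickly becomes unreadable
echo "$modern"                      # prints: one two
echo "$legacy"                      # prints: one two
```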

How do I store and re-use data?

We now know how to use bash for writing and managing simple commands. These commands give us access to many powerful operations on our system. We've learned how commands tell bash to execute programs by creating new processes for them. We've even learned to manipulate the basic input and output of these processes such that we can read from and write to arbitrary files.

Those of you who have been paying really close attention will even have spotted how we can pass arbitrary data into processes using constructs such as here-documents and here-strings.

The biggest limitation now is our inability to handle data flexibly. We can write it out to files and then read it in again, by employing many file redirections, and we can pass in static pre-defined data using here-documents and here-strings. But this leaves us longing for more.

High time to unlock the next level of wonders: bash parameters.

What are bash parameters?

Simply put, bash parameters are regions in memory where you can temporarily store some information for later use.

Not unlike files, we write to these parameters and read from them when we need to retrieve the information later. But since we're using the system's memory and not the disk to write this information to, access is much faster. Using parameters is also much easier and the syntax more powerful than redirecting input and output to and from files.

Bash provides a few different types of parameters: positional parameters, special parameters and shell variables. The latter are the most interesting type, the former two mainly give us access to certain information bash makes available to us. We'll introduce the practical aspects and usage of parameters through variables and then explain how positional and special parameters are different.

Shell Variables

A shell variable is essentially a bash parameter that has a name. You can use variables to store a value and later modify or read that value back for re-use.

Using variables is easy. You store information in them through variable assignment, and access that information at any later time using parameter expansion:

$ name=lhunath                         # Assign the value lhunath to the variable name.
$ echo "Hello, $name.  How are you?"   # Expand the value of name into the echo argument.
Hello, lhunath.  How are you?

As you can see, the assignment creates a variable called name and puts a value in it. Expansion of the parameter's value is done by prefixing the name with a $ symbol, which causes our value to get injected into the echo argument.


Assignment uses the = operator. It is imperative that you understand there can be no syntactical space around the operator. While other languages may permit this, bash does not. Remember from the previous chapter that spaces in bash have a special meaning: they split commands into arguments. If we were to put spaces around the = operator, they would cause bash to split the command into a command name and arguments, thinking you wanted to execute a program rather than assign a variable value:

$ name = lhunath               # Runs the command name with the arguments = and lhunath.
-bash: name: command not found

To fix this code, we simply remove the spaces around the = operator that were causing the word splitting. If we want to assign a value that begins with a few literal space characters, we need to use quotes to signal bash that our spaces are literal and shouldn't trigger word splitting:

$ name=lhunath
$ item='    4. Milk'           # Use quotes to make the spaces literal.

We can even combine this assignment syntax with other value expansions:

$ contents="$(cat hello.txt)"

Here, we perform a Command Substitution, expanding the contents of the hello.txt file into our assignment syntax, which subsequently results in those contents being assigned to the contents variable.

Parameter Expansion

Assigning values to variables is neat but not really immediately useful. It's being able to re-use those values at any time that makes parameters so interesting. Re-using parameter values is done by expanding them. Parameter Expansion effectively takes the data out of your parameter and inlines it into the data of your command. As we saw briefly before, we expand parameters by prefixing their name with a $ symbol. Whenever you see this symbol in bash, it's probably because something is getting expanded. It could be a parameter, or the output of a command, or the result of an arithmetic operation. We'll learn more about the other expansions later on.

In addition, parameter expansion allows you to wrap curly braces ({ and }) around your expansion. These braces tell bash where your parameter name begins and ends. They are usually optional, as bash can often figure out the name by itself, though sometimes they become a necessity:

$ name=Britta time=23.73                        # We want to expand time and add an s for seconds,
$ echo "$name's current record is $times."      # but bash mistakes the name for times, which holds nothing.
Britta's current record is .
$ echo "$name's current record is ${time}s."    # Braces explicitly tell bash where the name ends.
Britta's current record is 23.73s.

Parameter expansions are great for inserting user or program data into our command instructions, but they also have an extra ace up their sleeve: parameter expansion operators. While expanding a parameter, it is possible to apply an operator to the expanding value. This operator can modify the value in one of many useful ways. Remember that this operator only changes the value that is expanded; it does not change the original value that's sitting in your variable.

$ name=Britta time=23.73
$ echo "$name's current record is ${time%.*} seconds and ${time#*.} hundredths."
Britta's current record is 23 seconds and 73 hundredths.
$ echo "PATH currently contains: ${PATH//:/, }"
PATH currently contains: /Users/lhunath/.bin, /usr/local/bin, /usr/bin, /bin, /usr/libexec

The examples above use the %, # and // operators to perform various operations on the parameter's value before expanding the result. The parameters themselves aren't changed; the operator only affects the value that gets expanded into place. You'll also notice that we can use glob patterns here, just like we did during pathname expansion, to match against the values in our parameter.

In the first case, we used the % operator to remove the . and the number after it from time's value before expanding it. That left us with just the part in front of the ., which is the seconds. The second case did something similar: we used the # operator to remove a part from the start of the time value. Finally, we used the // operator (which is really a special case of the / operator) to replace every : character in PATH's value with , (a comma followed by a space). The result is a list of directories that is easier for people to read than the original colon-separated PATH.

Operator Example Result
${parameter#pattern} "${url#*/}"
Remove the shortest string that matches the pattern if it's at the start of the value.
${parameter##pattern} "${url##*/}"
Remove the longest string that matches the pattern if it's at the start of the value.
${parameter%pattern} "${url%/*}"
Remove the shortest string that matches the pattern if it's at the end of the value.
${parameter%%pattern} "${url%%/*}"
Remove the longest string that matches the pattern if it's at the end of the value.
${parameter/pattern/replacement} "${url/./-}"
Replace the first string that matches the pattern with the replacement.
${parameter//pattern/replacement} "${url//./-}"
Replace each string that matches the pattern with the replacement.
${parameter/#pattern/replacement} "${url/#*:/https:}"
Replace the string that matches the pattern at the beginning of the value with the replacement.
${parameter/%pattern/replacement} "${url/%.html/.jpg}"
Replace the string that matches the pattern at the end of the value with the replacement.
${#parameter} "${#url}"
Expand the length of the value (in characters).
${parameter:start[:length]} "${url:7}"
Expand a part of the value, starting at start, length characters long. You can even count start from the end rather than the beginning by using a (space followed by a) negative value.
${parameter[^|^^|,|,,][pattern]} "${url^^[ht]}"
Expand the transformed value, either upper-casing or lower-casing the first or all characters that match the pattern. You can omit the pattern to match any character.
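
To see several of these operators applied to one value (the url here is just an illustrative example, echoing the examples in the table):

```shell
url='https://guide.bash.academy/expansions.html'   # a hypothetical URL, for illustration only
echo "${url##*/}"    # remove the longest */ prefix: expansions.html
echo "${url%/*}"     # remove the shortest /* suffix: https://guide.bash.academy
echo "${url//./-}"   # replace every dot: https://guide-bash-academy/expansions-html
echo "${#url}"       # the length of the value: 42
```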



EXPAN.1. Assign hello to the variable greeting.

greeting=hello

EXPAN.2. Show the contents of the variable greeting.

echo "$greeting"

EXPAN.3. Assign the string world to the end of the variable's current contents.

greeting="$greeting world"
greeting+=" world"             # += appends the string to the end of the current value.

EXPAN.4. Show the last word in the variable greeting.

echo "${greeting##* }"

EXPAN.5. Show the contents of the variable greeting with the first character upper-cased and a period (.) at the end.

echo "${greeting^}."
Hello world.

EXPAN.6. Replace the first space character in the variable's contents with big .

greeting=${greeting/ / big }

EXPAN.7. Redirect the contents of the variable greeting into a file whose name is the value of the variable with the spaces replaced by underscores (_) and a .txt at the end.

echo "$greeting" > "${greeting// /_}.txt"

EXPAN.8. Show the contents of the variable greeting with the middle word fully upper-cased.

middle=${greeting% *} middle=${middle#* }; echo "${greeting%% *} ${middle^^} ${greeting##* }"
hello BIG world

What is the environment and what is it used for?

There are two separate spaces where variables are kept. These separate spaces are often confused, leading to many misunderstandings. You've already become familiar with the first: shell variables. The second space where variables are kept is the process environment. We'll introduce environment variables and explain how they differ from shell variables.

Environment Variables

Unlike shell variables, environment variables exist at the process level. That means they are not a feature of the bash shell, but rather a feature of any program process on your system. If we imagine a process as a piece of land you buy, the building we put on the land will be the code running in your process. You could put a bash house or a grep shack or a firefox tower on the land. Environment variables are variables stored on your process' land itself, while shell variables are stored inside the bash house built on your land.
You can store variables in the environment and you can store variables in the shell. The environment is something every process has, while the shell space is only available to bash processes. As a rule, you should put your variables in the shell space unless you explicitly require the behaviour of environment variables.
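
A minimal sketch of that rule in action (bash -c starts a child bash process; export, which we'll revisit below, copies a shell variable into the environment):

```shell
greeting=hello                             # a plain shell variable: not copied to children
bash -c 'echo "child sees: [$greeting]"'   # prints "child sees: []"
export greeting                            # copy the variable into our process environment
bash -c 'echo "child sees: [$greeting]"'   # prints "child sees: [hello]"
```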

    ╭─── bash ─────────────────────────╮
    │             ╭──────────────────╮ │
    │ ENVIRONMENT │ SHELL            │ │
    │             │ shell_var1=value │ │
    │             │ shell_var2=value │ │
    │             ╰──────────────────╯ │
    │ ENV_VAR1=value                   │
    │ ENV_VAR2=value                   │
    ╰──────────────────────────────────╯

When you run a new program from the shell, bash will run this program in a new process. When it does, this new process will have its own environment. But unlike shell processes, ordinary processes do not have shell variables. They only have environment variables. More importantly, when a new process is created, its environment is populated by making a copy of the environment of the creating process:

    ╭─── bash ───────────────────────╮
    │             ╭────────────────╮ │
    │ ENVIRONMENT │ SHELL          │ │
    │             │ greeting=hello │ │
    │             ╰────────────────╯ │
    │ HOME=/home/lhunath             │
    │ PATH=/bin:/usr/bin             │
    ╰─┬──────────────────────────────╯
      ╎  ╭─── ls ─────────────────────────╮
      └╌╌┥                                │
         │ ENVIRONMENT                    │
         │                                │
         │ HOME=/home/lhunath             │
         │ PATH=/bin:/usr/bin             │
         ╰────────────────────────────────╯

It is a common misconception that the environment is a system-global pool of variables that all processes share. This illusion is often the result of seeing the same variables available in child processes. When you create a custom environment variable in the shell, any child processes you create afterwards will inherit this variable as a result of it being copied from your shell into the child's environment. However, since the environment is specific to each process, changing or creating new variables in the child will in no way affect the parent:

    ╭─── bash ───────────────────────╮
    │             ╭────────────────╮ │
    │ ENVIRONMENT │ SHELL          │ │
    │             │ greeting=hello │ │
    │             ╰────────────────╯ │
    │ HOME=/home/lhunath             │
    │ PATH=/bin:/usr/bin             │
    │ NAME=Bob                       │
    ╰─┬──────────────────────────────╯
      ╎  ╭─── bash ───────────────────────╮
      └╌╌┥             ╭────────────────╮ │
         │ ENVIRONMENT │ SHELL          │ │
         │             ╰────────────────╯ │
         │ HOME=/home/lhunath             │
         │ PATH=/bin:/usr/bin             │
         │ NAME=Bob                       │
         ╰────────────────────────────────╯

$ NAME=John

    ╭─── bash ───────────────────────╮
    │             ╭────────────────╮ │
    │ ENVIRONMENT │ SHELL          │ │
    │             │ greeting=hello │ │
    │             ╰────────────────╯ │
    │ HOME=/home/lhunath             │
    │ PATH=/bin:/usr/bin             │
    │ NAME=Bob                       │
    ╰─┬──────────────────────────────╯
      ╎  ╭─── bash ───────────────────────╮
      └╌╌┥             ╭────────────────╮ │
         │ ENVIRONMENT │ SHELL          │ │
         │             ╰────────────────╯ │
         │ HOME=/home/lhunath             │
         │ PATH=/bin:/usr/bin             │
         │ NAME=John                      │
         ╰────────────────────────────────╯
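
The diagrams above can be reproduced in a live shell:

```shell
export NAME=Bob                            # put NAME in our shell's environment
bash -c 'NAME=John; echo "child: $NAME"'   # the child changes only its own copy: prints "child: John"
echo "parent: $NAME"                       # our copy is untouched: prints "parent: Bob"
```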

This distinction also makes it clear why one would opt to put certain variables in the environment. While most of your variables will be ordinary shell variables, you may opt to "export" some of your shell variables into the shell's process environment. In doing so, you're effectively exporting your variable's data to each child process you create, and those child processes will in turn export their environment variables to their children. Your system uses environment variables for all sorts of things, mainly to provide state information and default configurations for certain processes.

For instance, the login program, which is traditionally used to log a user into the system, exports information about your user into the environment (USER containing your user name, HOME containing your home directory, PATH containing a standard command search path, etc.). All processes that run as a result of you logging in can now learn what user they're running for by looking at the environment.

You can export your own variables into the environment. This is often done to configure the behavior of any programs you run. For instance, you can export LANG and assign it a value that tells programs what language and character set they should use. Environment variables are generally only useful to those programs that know about and support them explicitly. Some variables have a very narrow usage, for instance LSCOLORS can be used by some ls programs to colorize their output of files on your system.
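
A handy related feature: an assignment placed just before a command name on the same line is put only into that one command's environment. Here is a small sketch using the standard LC_ALL locale variable and sort:

```shell
cd "$(mktemp -d)"                   # scratch directory, so no real files are touched
printf '%s\n' b A a B > letters.txt # a small sample file
LC_ALL=C sort letters.txt           # the assignment applies only to this single sort command;
                                    # in the C locale, sorting is by byte value, so: A B a b
```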

    ╭─── bash ───────────────────────╮
    │             ╭────────────────╮ │
    │ ENVIRONMENT │ SHELL          │ │
    │             │ greeting=hello │ │
    │             ╰────────────────╯ │
    │ HOME=/home/lhunath             │
    │ PATH=/bin:/usr/bin             │
    │ LANG=en_CA                     │
    │ PAGER=less                     │
    │ LESS=-i -R                     │
    ╰─┬──────────────────────────────╯
      ╎  ╭─── rm ─────────────────────────╮   rm uses just LANG, if present, to determine
      ├╌╌┥                                │   the language of its error messages.
      ╎  │ ENVIRONMENT                    │
      ╎  │                                │
      ╎  │ HOME=/home/lhunath             │
      ╎  │ PATH=/bin:/usr/bin             │
      ╎  │ LANG=en_CA                     │
      ╎  │ PAGER=less                     │
      ╎  │ LESS=-i -R                     │
      ╎  ╰────────────────────────────────╯
      ╎  ╭─── man ────────────────────────╮   In addition to LANG, man uses PAGER to determine
      └╌╌┥                                │   what program to use for paginating long manuals.
         │ ENVIRONMENT                    │
         │                                │
         │ HOME=/home/lhunath             │
         │ PATH=/bin:/usr/bin             │
         │ LANG=en_CA                     │
         │ PAGER=less                     │
         │ LESS=-i -R                     │
           ╎  ╭─── less ───────────────────────╮   less makes use of the LESS variable to supply
           └╌╌┥                                │   an initial configuration for itself.
              │ ENVIRONMENT                    │
              │                                │
              │ HOME=/home/lhunath             │
              │ PATH=/bin:/usr/bin             │
              │ LANG=en_CA                     │
              │ PAGER=less                     │
              │ LESS=-i -R                     │

Shell Initialization

When you start an interactive bash session, bash will prepare itself for usage by reading a few initialization commands from different files on your system. You can use these files to tell bash how to behave. One in particular is intended to give you the opportunity to export variables into the environment. The file is called .bash_profile and it lives in your home directory. There's a good chance that you don't have this file yet; if this is the case, you can just create the file and bash will find it the next time it goes looking for it.

At the very end of your ~/.bash_profile, you should have the command source ~/.bashrc. That's because when .bash_profile exists, bash behaves a little curiously in that it stops looking for its standard shell initialization file, ~/.bashrc. The source command remedies this oddity.

Note that if there is no ~/.bash_profile file, bash will try to read from ~/.profile instead, if it exists. The latter is a generic shell profile configuration file, which is also read by other shells. You can opt to put your environment configuration there instead, but if you do, you need to be aware that you should limit yourself to POSIX sh syntax and not use any bash-specific shell syntax in the file. POSIX sh syntax is similar to bash but it is beyond the scope of this guide.
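A minimal ~/.bash_profile following this advice might look like the sketch below; the exported values are examples only, not requirements:

```shell
# ~/.bash_profile -- read by login shells only.
# Export environment variables here; children inherit them.
export PATH="$HOME/bin:$PATH"   # example: prepend a personal bin directory
export EDITOR=vim               # example preferences
export LANG=en_CA.UTF-8

# The existence of .bash_profile stops bash from reading ~/.bashrc,
# so hand off to it explicitly:
[[ -f ~/.bashrc ]] && source ~/.bashrc
```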

    login                 The login program signs the user in.
      ╰─ -bash            The login command starts the user's login shell.
         ╰─ screen        The user runs the screen program from his login shell.
              ╰─ weechat  The screen program creates multiple windows and allows the user
              ╰─ bash     to switch between them. The first window runs an IRC client;
              ╰─ bash     the other two run non-login bash shells.

This process tree depicts a user who uses bash as his login shell and multiplexes his terminal to create several separate "screens", allowing him to interact with multiple concurrently running programs. After logging in, the system (the login program) determines the user's login shell. It might do this, for example, by looking at /etc/passwd. In this case, the user's login shell is set to bash. login proceeds by running bash and setting its name to -bash. It is standard procedure for the login program to prefix the name of the login shell with a - (dash), indicating to the shell that it should behave as a login shell.

Once the user has a running bash login shell, he runs the screen program. While screen is running, it takes over the user's entire terminal and emulates multiple terminals within it, allowing the user to switch between them. In each emulated terminal, screen runs a new program. In this case, the user has screen configured to start one emulated terminal that runs an IRC client, and two that run interactive (but non-login) bash shells.

Let's take a look at how the initialization happens in this scenario, and where the environment variables come from:

      │ TERM=dumb
      │ USER=lhunath
      │ HOME=/home/lhunath
      │ PATH=/usr/bin:/bin
      ╰─ -bash
         │ TERM=dumb
         │ USER=lhunath
         │ HOME=/home/lhunath
         │ PATH=/usr/bin:/bin
         │ PWD=/home/lhunath
         │ SHLVL=1
         │╭──────────────╮     ╭────────────────────────╮╭──────────────────╮
         ┝┥ login shell? ┝─yes─┥ source ~/.bash_profile ┝┥ source ~/.bashrc │
         │╰──────────────╯     ╰────────────────────────╯╰──────────────────╯
         │ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/libexec
         │ EDITOR=vim
         │ LANG=en_CA.UTF-8
         │ LESS=-i -M -R -W -S
         │ GREP_COLOR=31
         ╰─ screen
              │ TERM=screen-bce (screen replaces the inherited TERM=dumb)
              │ USER=lhunath
              │ HOME=/home/lhunath
              │ PATH=/usr/bin:/bin
              │ PWD=/home/lhunath
              │ SHLVL=1
              │ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/libexec
              │ EDITOR=vim
              │ LANG=en_CA.UTF-8
              │ LESS=-i -M -R -W -S
              │ GREP_COLOR=31
              │ WINDOW=0
              ╰─ weechat
              ╰─ bash
              │    │╭──────────────╮
              │    ╰┥ login shell? ┝
              │     ╰──────┰───────╯
              │            no
              │     ╭──────┸───────╮     ╭──────────────────╮
              │     │ interactive? ┝─yes─┥ source ~/.bashrc │
              │     ╰──────────────╯     ╰──────────────────╯
               ╰─ bash
                    │╭──────────────╮
                    ╰┥ login shell? ┝
                     ╰──────┰───────╯
                            no
                     ╭──────┸───────╮     ╭──────────────────╮
                     │ interactive? ┝─yes─┥ source ~/.bashrc │
                     ╰──────────────╯     ╰──────────────────╯

As you can see, different levels export their own variables into the environment. Each child process inherits the variables from its parent's environment. In turn, it can overwrite some of these values or add new variables.

Notice how the first (login) bash sources both ~/.bash_profile and ~/.bashrc while the bottom two source only ~/.bashrc. That's because only the first bash process is started as a "login shell" (by means of having a - in front of its name). The bottom two bash processes are ordinary interactive shells. The reason they have no need for sourcing ~/.bash_profile is now becoming more obvious: the responsibility of ~/.bash_profile is to set up bash's environment, and the bottom two shells are already inheriting the environment from their login shell ancestor.

What else can I use parameters for?

As we mentioned earlier in the chapter, there are positional parameters, special parameters and variables. Variables are essentially parameters with a name. We're going to have a closer look at the different kinds of parameters and how they allow you to get specific information from the shell or change certain behaviours of the shell.

Positional Parameters

Where variables are parameters with a name, positional parameters are parameters with a number (more specifically, a positive integer number). We expand these parameters using the normal parameter expansion syntax: $1, $3. It's important to note, though, that bash requires you to employ curly braces around positional parameters of more than one digit: ${10}, ${22} (in practice, you will rarely if ever need to explicitly refer to positional parameters this high up).

Positional parameters expand to values that were sent into the process as arguments when it was created by the parent. For instance, when you start a grep process using this command:

$ grep Name registrations.txt

You're effectively running the grep command with the arguments Name and registrations.txt. If grep were a bash script, the first argument would be available in the script by expanding $1 and the second argument by expanding $2. Positional parameters higher than 2 will be unset.

It's good to know that there is also a zero'th positional parameter. This positional parameter expands to the name of the process. The name of the process is chosen by the program that creates it, so the zero'th argument can really contain anything and is entirely up to your script's parent. Most shells will use the absolute path of the file that they ran to start the process as the name of the process, or the command that the user executed to start the process. Be aware that this is by no means a requirement and you cannot make any reliable assumptions on the contents of the zero'th argument: it is best avoided for all intents and purposes.

What's nice and extremely convenient: most of what we've learned thus far about variable parameters applies to positional parameters as well: we can expand them and we can apply parameter expansion operators on these expansions to mutate the resulting values:

#!/usr/bin/env bash
echo "The Name Script"
echo "usage: names 'My Full Name'"; echo

first=${1%% *} last=${1##* } middle=${1#"$first"} middle=${middle%"$last"}
echo "Your first name is: $first"
echo "Your last name is: $last"
echo "Your middle names are: $middle"

If you save this script in a file called names and run it according to the usage description, by passing a single argument to it, you'll see the script analyse your name and inform you which parts of your name constitute the first, last and middle names. We're using the variables first, last and middle to store these pieces of information for later, when we expand the variables in the echo statements. Notice how the computation of the middle name requires both the knowledge of the full name (available from the first positional parameter) and the first name (which was previously computed and stored in the variable first).

$ chmod +x names
$ ./names 'Maarten Billemont'
The Name Script
usage: names 'My Full Name'

Your first name is: Maarten
Your last name is: Billemont
Your middle names are: 
$ ./names 'James Tiberius "Jim" Kirk'
The Name Script
usage: names 'My Full Name'

Your first name is: James
Your last name is: Kirk
Your middle names are:  Tiberius "Jim"

It is important to understand that, unlike most variables, positional parameters are read-only parameters. On reflection, it likely makes sense to you that one cannot change the arguments to your script from within your script. As such, this is a syntax error:

$ 1='New First Argument'
-bash: 1=New First Argument: command not found

While the error message is slightly confounding, it indicates that bash doesn't even recognize this statement as an attempt to assign a value to a variable (since the parameter 1 is not a variable) and instead thinks you have given it the name of a command you want to run.

There is, however, a built-in command we can use to change the values of the set of positional parameters. While this is a common practice in ancient shells that lack bash's more advanced features, you will rarely if ever have a need for this in bash. To modify the current set of positional parameters, use the set command and specify the new positional parameters as arguments after the -- argument:

$ set -- 'New First Argument' Second Third 'Fourth Argument'
$ echo "1: $1, 2: $2, 4: $4"
1: New First Argument, 2: Second, 4: Fourth Argument

In addition to changing the set of positional parameters, there is also the shift built-in that can be used for "pushing" our set of positional parameters around. When we shift positional parameters, we essentially push them all toward the beginning, causing the first few positional parameters to get bumped off to make way for the others:

New First Argument Second Third Fourth Argument
$ shift 2                <-- Push the positional parameters back 2.
Third Fourth Argument    <-- The first two disappeared; the third is now in the first spot, with the fourth in second place.
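As a sketch of set -- and shift working together, you can paste the following straight into an interactive bash session:

```shell
set -- 'New First Argument' Second Third 'Fourth Argument'
echo "first: $1, count: $#"   # first: New First Argument, count: 4
shift 2                       # bump the first two parameters off
echo "first: $1, count: $#"   # first: Third, count: 2
```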

Finally, when starting a new bash shell using the bash command, there is a way to pass in positional parameters. This is a very useful way of passing a list of arguments to an inline bash script. You will use this method later when you combine inline bash code with other utilities, but for now this is a great way of experimenting with positional parameters without having to create a separate script to invoke and pass arguments to (such as we did with the names example above). Here's how to run an inline bash command and pass in a list of arguments to populate the positional parameters:

$ bash -c 'echo "1: $1, 2: $2, 4: $4"' -- 'New First Argument' Second Third 'Fourth Argument'
1: New First Argument, 2: Second, 4: Fourth Argument

We run the bash command, passing the -c option followed by an argument that contains some bash shell code. This will tell bash that instead of starting a new interactive bash shell, you want to just have the shell run the provided bash code and finish. After the shell code, we specify the arguments to use for populating the positional parameters. The first argument in our example is --, and while this argument is technically used to populate the zero'th positional parameter, it is a good idea to always use -- for the sake of compatibility and to make clear the separation between bash's arguments and the arguments to your shell code. After this argument, each argument populates the standard positional parameters as you would expect.
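You can verify that the argument after the -c code really does populate the zero'th parameter by passing something other than --; myname below is just an arbitrary label:

```shell
bash -c 'echo "zeroth: $0, first: $1"' myname 'First Argument'
# prints: zeroth: myname, first: First Argument
```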

If we used double-quotes in the example above, the shell that we're typing the bash command into would expand the $1, $2 and $4 expansions instead, resulting in a broken argument to the -c option.

To illustrate this point, compare our good example from above:

$ bash -vc 'echo "1: $1, 2: $2, 4: $4"' -- \            <-- The -v option makes bash show us the code it is going to run before the result.
'New First Argument' Second Third 'Fourth Argument'     <-- We can use \ at the end of a line to resume on a new line.
echo "1: $1, 2: $2, 4: $4"                              <-- Here is the code it is going to run.
1: New First Argument, 2: Second, 4: Fourth Argument    <-- And this is the result.

to what would happen if we used double quotes around the -c argument instead of single quotes:

$ bash -vc "echo "1: $1, 2: $2, 4: $4"" -- \            <-- The outer double quotes conflict with the inner double quotes, leading to ambiguity.
'New First Argument' Second Third 'Fourth Argument'
echo 1:                                                 <-- As a result, the argument to -c is no longer the entire bash code but only its first word.
$ bash -vc "echo \"1: $1, 2: $2, 4: $4\"" -- \          <-- Even if we fix the quoting ambiguity, $1, $2 and $4 are now expanded by the shell
'New First Argument' Second Third 'Fourth Argument'         we're typing this command into, not the shell we pass the arguments to.
echo "1: , 2: , 4: "                                    <-- Since $1, $2 and $4 are likely empty in your interactive shell, they will expand
1: , 2: , 4:                                                empty and disappear from the -c argument.

We could go as far as to fix all of the issues inside the double quotes by backslash-escaping all of the special characters, including the double quotes and dollar signs. This would fix the issue, but it makes the shell code look extremely convoluted and hard to read. Maintaining shell code that has been escaped in a special way like this is a nightmare and begs for accidental mistakes that are hard to spot:

$ bash -vc "echo \"1: \$1, 2: \$2, 4: \$4\"" -- \
'New First Argument' Second Third 'Fourth Argument'
echo "1: $1, 2: $2, 4: $4"
1: New First Argument, 2: Second, 4: Fourth Argument

Special Parameters

Understanding positional parameters makes understanding special parameters much easier: they are very similar. Special parameters are parameters whose name is a single symbolic character; they are used to request certain state information from the bash shell. Here are the different kinds of special parameters and the information they hold:

Parameter Example Description
"$*" echo "Arguments: $*" Expands into a single string, joining all positional parameters into one, separated by the first character of IFS (by default, a space).
Note: You should never use this parameter unless you explicitly intend to join all the parameters. You almost always want to use @ instead.
"$@" rm "$@" Expands the positional parameters as a list of separate arguments.
"$#" echo "Count: $#" Expands into a number indicating the number of positional parameters that are available.
"$?" (( $? == 0 )) || echo "Error: $?" Expands to the exit code of the last (synchronous) command that finished.
An exit code of 0 indicates the command succeeded; any other number indicates why the command failed.
"$-" [[ $- = *i* ]] Expands to the set of option flags that are currently active in the shell.
Option flags configure the behaviour of the shell. The example tests for the presence of the i flag, indicating the shell is interactive (has a prompt) and is not running a script.
"$$" echo "$$" > /var/run/myscript.pid Expands to a number that is the unique process identifier of the shell process (the one parsing the code).
"$!" kill "$!" Expands to a number that is the unique process identifier of the last process that was started in the background (asynchronously).
The example signals the background process that it's time to terminate.
"$_" mkdir -p ~/workspace/projects/myscripts && cd "$_" Expands to the last argument of the previous command.

Just like positional parameters, special parameters are read-only: you can only use them to expand information, not store information.
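A few of these special parameters can be seen in action with an inline shell; the sketch below passes three arguments and deliberately runs a failing command:

```shell
bash -c '
    echo "count: $#"        # number of positional parameters
    echo "joined: $*"       # all parameters joined by the first character of IFS
    false                   # a command that fails with exit code 1
    echo "status: $?"       # exit code of the last command
' -- one two 'three four'
```

This prints count: 3, then joined: one two three four, then status: 1.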

Shell Internal Variables

You already know what shell variables are. Were you aware that the bash shell also creates a few variables for you? These variables are used for various tasks and are handy for looking up certain state information from the shell or changing certain shell behaviours.

While bash actually defines quite a lot of internal shell variables, most of them are not very useful. Others have use but only in very specific scenarios. Many of these variables require you to understand more advanced bash concepts. I will briefly mention a few internal shell variables that are interesting to learn about at this stage. The full list of internal shell variables can be found in man bash.

BASH /usr/local/bin/bash
This variable contains the full pathname of the command that started the bash you are currently in.
BASH_VERSION 4.4.0(1)-release
A version number that describes the currently active version of bash.
BASH_VERSINFO [ 4, 4, 0, 1, release, x86_64-apple-darwin16.0.0 ]
An array of detailed version information on the currently active version of bash.
BASH_SOURCE myscript
This contains all the filenames of the scripts that are currently running. The first is the script that's currently running.
Usually it is either empty (no scripts running) or contains just the pathname of your script.
BASHPID
Contains the process ID of the bash that is parsing the script code.
UID 501
Contains the ID number of the user that's running this bash shell.
HOME /Users/lhunath
Contains the pathname of the home directory of the user running the bash shell.
HOSTNAME myst.local
The name of your computer.
LANG en_CA.UTF-8
Used to indicate your preferred language category.
MACHTYPE x86_64-apple-darwin16.0.0
A full description of the type of system you are running.
PWD /Users/lhunath
The full pathname of the directory you are currently in.
OLDPWD /Users/lhunath
The full pathname of the directory you were in before you came to the current directory.
RANDOM 12568
Expands a new random number between 0 and 32767, every time.
SECONDS 338217
Expands the number of seconds your bash shell has been running for.
LINES
Contains the height (number of rows or lines) of your terminal display.
COLUMNS
Contains the width (number of single-character cells) of a single row in your terminal display.
IFS $' \t\n'
The "Internal Field Separator" is a string of characters that bash uses for word-splitting of data. By default, bash splits on spaces, tabs and newlines.
PATH /Users/lhunath/.bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/libexec
The list of paths bash will search for executable programs when you run a command.
PS1 \s-\v\$
This contains a string that describes what the interactive bash shell's prompt should look like.
PS2 >
This contains a string that describes what the interactive bash shell's secondary prompt should look like. The secondary prompt is used when you finish a command line but the command isn't yet complete.

As I mentioned, there are many other internal shell variables, but they each serve very specific advanced cases that are uninteresting at this point. Chances are, if you're looking for some information on how bash is currently operating, you can find it in one of its internal shell variables.
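A quick way to poke at a few of these variables is another inline shell; the exact output differs per system, so the sketch only checks properties that always hold:

```shell
bash -c '
    echo "bash major version: ${BASH_VERSINFO[0]}"   # first element of the version array
    r=$RANDOM                                        # sample one random number
    (( r >= 0 && r <= 32767 )) && echo "RANDOM is in range"
'
```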


PARAM.1. Start a new bash shell that outputs its first argument and pass Hello World! in as an argument to it.

bash -c 'echo "$1"' -- 'Hello World!'

PARAM.2. Start a bash shell that outputs the number of arguments passed in and pass in the arguments 1, 2 and The Third.

bash -c 'echo "$#"' -- 1 2 'The Third'

PARAM.3. Start a bash shell that shifts a positional parameter away and then outputs the first. Pass in the arguments 1, 2 and The Third.

bash -c 'shift; echo "$1"' -- 1 2 'The Third'

PARAM.4. Start a bash shell that outputs the last argument passed in and pass in the arguments 1, 2 and The Third.

bash -c 'echo "${@: -1}"' -- 1 2 'The Third'


Last but certainly not least, we arrive at perhaps the most interesting kind of bash parameter: arrays.

What are arrays and why should I use them?

An array is a fancy word for a parameter that can hold not one string, but a whole list of strings. The concept of storing lists of things is not new - we've seen it done before in this very guide, such as PATH storing a list of directory pathnames for bash to find command programs. Arrays, however, were introduced to address a very important problem that arises when you use simple string variables to store lists of things.

The problem with storing lists of things inside a simple variable is that when you become interested in the separate elements of this list, you inevitably need to split this single variable apart into those separate elements. Most of us, however, don't even notice this as a problem: as human beings, we are extremely adept at doing this contextually. When we see a name such as Leonard Cohen, we recognize that it consists of two separate names that together make a single person's full name. When we look at a string such as Leonard Cohen - Adam Cohen - Lorca Cohen, we immediately recognize it as a list of three distinct names: we instantly recognize the pattern in this string that's separating names with a dash. In fact, we're so good at this that we don't usually even need to stop and think when we see a list of names such as Susan Q. - Mary T. - Steven S. - Anne-Marie D. - Peter E.. We're even great at finding the relevant contextual units in larger strings such as poems that consist of lines and paragraphs.

Unfortunately, though, when we start thinking in terms of having the computer handle data for us, we need to stop thinking about our excellent human abstractions and put on our cognitive baby shoes. A computer doesn't know that Susan Q. - Mary T. - Steven S. - Anne-Marie D. - Peter E. is a list of names, it certainly doesn't know that these names are delimited by dashes and it most definitely wouldn't be able to guess that Anne-Marie is a single name as opposed to two distinct people in the list.

A good way of being explicit about the separate elements in our list is by using arguments to our commands. Remember when we learned about quoting? This is, in fact, a great time to recall our quoting lessons.

$ ls -l 05 Between Angels and Insects.ogg

In this command, bash splits the command line into many arguments, and ls will treat each argument as a separate filename. This is obviously not the intended effect, but neither program is as good at deriving contextual sense from arbitrary data as we humans are. It is therefore important that we are explicit about what the elements of our list are:

$ ls -l "05 Between Angels and Insects.ogg"

Now that we have made it clear to bash that our list contains only a single filename, and that filename itself contains several words, the ls command is capable of doing its job properly.

The very same problem exists with variables. What if we wanted to create a variable that contains a list of all the files we want to delete? How do we create such a list in a way that we can then pass each distinct element of that list to an rm command for deletion, without running the risk that bash will misunderstand how our file names need to be interpreted?

The answer is arrays:

$ files=( myscript hello.txt "05 Between Angels and Insects.ogg" )
$ rm -v "${files[@]}"

To create an array variable, bash introduces a slightly different assignment operator: =( ). As with the standard =, we put the name of our variable on the left-hand side of the operator; the list of values to assign to this variable goes between the ( and ) parentheses.

You might recall from our section on variable assignment that it was critical not to put syntactical spaces around our assignment values: spaces after the = cause bash to split the assignment into a command name and argument pair; unquoted spaces in our assignment value cause bash to split the value into a partial assignment followed by a command name. With this new array assignment syntax, spaces are freely permitted inside the parentheses, and in fact, they are used to separate the elements of your array's list of values. But just like in regular variable assignment, when a space needs to be part of the variable's data, it must be quoted so that bash interprets the space as literal. Notice in the example above that we use syntactical spacing between myscript and hello.txt, allowing bash to understand these two words as distinct elements of the list, while we use literal spacing between the words 05 and Between: the space here is part of the filename and should not cause bash to break the words into separate list elements. The space is literal, and as such, we have quoted it.

In fact, these syntactical rules are nothing new. We already know how to pass distinct arguments to our commands, and passing distinct elements to our array assignment operator is no different.
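We can check that bash understood our quoting by counting the resulting elements, using the ${#...[@]} count syntax detailed later in this section:

```shell
files=( myscript hello.txt "05 Between Angels and Insects.ogg" )
echo "${#files[@]}"       # prints 3: the quoted spaces did not split the third element
unquoted=( myscript hello.txt 05 Between Angels and Insects.ogg )
echo "${#unquoted[@]}"    # prints 7: without quotes, each word became its own element
```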

Finally, after creating a list of files, we expand our parameter for our rm command. If you recall from the parameter expansion section above, expansion happens by prefixing our parameter name with a $-sign. Contrary to regular parameter expansion, though, we are not interested in expanding into a single argument: what we want to do is expand every element of our list as a separate and distinct argument to the rm command. To do this, we suffix our parameter name with [@] and wrap the whole expansion in curly braces ({ }) to ensure bash understands it as a single parameter expansion unit. The expansion of the files parameter using the "${files[@]}" syntax effectively results in this:

$ rm -v myscript hello.txt "05 Between Angels and Insects.ogg"
removed 'myscript'
removed 'hello.txt'
removed '05 Between Angels and Insects.ogg'

Bash neatly expands each separate element of our array list as a separate argument to the rm command!

Congratulations! You now understand the most powerful data structure in the bash shell language.

What else can I do with arrays?

In addition to array assignment and array expansion, bash provides some other operations that we can perform on arrays:

$ files+=( selfie.png )   <-- The +=( ) operator appends a list of items to the end of an array.
$ files=( *.txt )         <-- Just like in a command's arguments, we can expand glob patterns here.
$ echo "${files[0]}"      <-- To expand a single item from an array, specify that item's ordinal number.
$ echo "$files"           <-- If we forget the array expansion syntax, bash will expand only the first item.
$ unset "files[3]"        <-- To remove a specific item from the array, we use unset.
                              Note: we do not use a $ here, since we are not expanding the value!

In addition to the [@] suffix for expanding array elements as distinct arguments, bash also has a way of expanding all array elements into a single argument. This is done using the [*] suffix. How does bash merge all of our separate elements into a single argument? There are many ways in which we might expect it to do this: does it create a space-separated string? Does it squeeze all elements together into one long string with nothing delimiting them? Perhaps it creates a single string where each element is on a separate line? The fact is that, for all the reasons illustrated above, there isn't a single strategy for merging distinct elements into one string that doesn't come with problems. This operator is therefore highly dubious and should be avoided in favour of [@] in nearly all cases!

The fact is, bash allows you to choose how to merge elements into a single string when you use [*]: by looking at the current value of the IFS internal shell variable. Bash uses this variable's first character (which, by default, is a space) to separate the elements in the resulting string:

$ names=( "Susan Quinn" "Anne-Marie Davis" "Mary Tate" )
$ echo "Invites sent to: <${names[*]}>."                 <-- Results in a single argument where elements are separated by a literal space.
Invites sent to: <Susan Quinn Anne-Marie Davis Mary Tate>.
$ ( IFS=','; echo "Invites sent to: <${names[*]}>." )    <-- When we change IFS to a ,, the distinct elements become more clear.
Invites sent to: <Susan Quinn,Anne-Marie Davis,Mary Tate>.

Since a single string containing multiple distinct elements is almost always flawed and less useful than an array variable with those elements nicely separated, there happens to be very little real use for the [*] suffix. With one exception: this operator is quite useful for displaying a list of elements to the user. When we're trying to show an array's values to a human, we need not be so worried about the syntactical correctness of the output. The example above, where IFS is changed to , illustrates a common way of displaying the values in an array to the user.

Finally, all of the special parameter expansion operators we learned about previously can also be applied to array expansions, but we're going to re-iterate some of them as their effect is quite interesting in the context of expanding multiple distinct elements.

For starters, the ${parameter[@]/pattern/replacement} operator and all of its variants have their replacement logic applied to each element distinctly as it is expanded:

$ names=( "Susan Quinn" "Anne-Marie Davis" "Mary Tate" )
$ echo "${names[@]/ /_}"                  <-- Replace spaces with underscores in each name.
Susan_Quinn Anne-Marie_Davis Mary_Tate
$ ( IFS=','; echo "${names[*]/#/Ms }" )   <-- More interestingly: replace the start of each name with Ms ,
Ms Susan Quinn,Ms Anne-Marie Davis,Ms Mary Tate    effectively prefixing every element with a string as we expand them.

The ${#parameter} operator combined with the [@] suffix gives us a count of the elements:

$ echo "${#names[@]}"
3
$ echo "${#names[1]}"    <-- But we can still get the length of a string by specifying directly
16                           which string element in the array we want to get the length of.

And lastly, the ${parameter[@]:start:length} operator can be used to obtain slices or "sub-sets" of our array:

$ echo "${names[@]:1:2}"
Anne-Marie Davis Mary Tate
$ echo "${names[@]: -2}"      <-- Specifying a negative start allows us to count backwards from the end!
Anne-Marie Davis Mary Tate        Omitting the length yields "all remaining elements" from the start.

Notice that it is important to include a space in front of the negative start value: if we omit the space, bash gets confused and thinks you are trying to invoke the ${parameter:-value} operator, which substitutes a default value whenever parameter's value is empty. This is obviously not what we want.
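To make the distinction concrete, here are both operators side by side, reusing the names array from above (oops is a deliberately unset example variable):

```shell
names=( "Susan Quinn" "Anne-Marie Davis" "Mary Tate" )
echo "${names[@]: -2}"    # the space makes this a slice: the last two elements
echo "${names[@]:(-2)}"   # parentheses are another way to disambiguate the negative start
echo "${oops:-fallback}"  # no space: the "default value" operator, for the unset variable oops
```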

That's it! You've got a firm grasp of the absolutely most important and useful aspects of the bash shell language: its parameters and expansions, as well as the many ways in which we can apply operators to expanded values and shape them to our every need.
