How to easily manipulate long paths in Linux

I found this little gem a while ago and a coworker identified it as useful too, so I’m posting it:

Let’s say you want to copy a file to a new name in the same directory, but don’t want to change to that directory, and the path is really long…

cp /my/very/long/path/to/{old_file,new_file}.cpp

This will copy /my/very/long/path/to/old_file.cpp to /my/very/long/path/to/new_file.cpp
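Brace expansion is performed by the shell before the command runs, so the same trick works with any command, not just cp. A couple of sketches (paths are illustrative):

```shell
# Expansion generates one argument per alternative:
echo /my/very/long/path/to/{old_file,new_file}.cpp
# prints: /my/very/long/path/to/old_file.cpp /my/very/long/path/to/new_file.cpp

# The same idea renames a file in place:
# mv /my/very/long/path/to/{old_file,new_file}.cpp

# Or backs a file up by appending a suffix:
# cp /etc/nginx/{nginx.conf,nginx.conf.bak}
```

Prepending echo to any expansion, as above, is a safe way to preview what it will do before running the real command.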


(Off-Topic) A column of pixels on my Samsung Galaxy S7 Edge went out, and Samsung told me tough luck buddy!

Sorry to go off-topic on you, but I really feel the need to post this – if for no other purpose than to vent.

I purchased my Samsung Galaxy S7 Edge shortly after they came out. I paid full price and bought the phone outright. I was very happy with the look, feel, and speed of the phone.

Samsung provides a 1 year warranty on these products.  About 2 months ago a column of pixels on the phone went out (at the 1 year and 4 month point):

[Photo: SamsungS7Edge.png – a vertical column of dead pixels on the display]

I called Samsung to discuss, and they told me that it would cost $70 to send it in and have it checked. I sent the phone in to hopefully get it repaired.

They came back to me with a bill of $300 and sent the phone back to me without talking to me. They did not repair the phone.

I called Samsung to see if they could help me out.  A Samsung Galaxy S7 Edge costs ~$600 now and with T-Mobile’s buy one get one free it’s even cheaper. $300 would be half the cost of a new phone to repair.

They told me there was damage to the phone and there was nothing they could do for me because I am out of warranty. Does that image look like there was damage? I treated this phone very well, and there are minimal scratches on it from my pocket – the phone is not damaged.

So let’s reflect on the experience:

  • I purchased Samsung’s flagship phone right when it came out at full price
  • The phone had a column of pixels go bad (without a cause) just outside of the manufacturer’s warranty – to me this is a manufacturing defect
  • I had to send in my phone for over a week (truly inconvenient)
  • They sent the phone back before talking to me about the repair (i.e. if I did want it fixed I’d have to ship it back again)
  • Samsung told me that the phone had damage on the screen – it does not
  • Samsung charged me $70 to tell me this

Good customer service in this case would’ve been to extend the manufacturer’s warranty or at least offer to help with the cost of fixing the screen. I should not have to send the phone in a second time to get the repair. I also think that $300 to fix a screen is ridiculous.

Samsung, you should never blame the customer for something they did not do and charge them to do it. If your intention in providing this appalling customer service was to drive me to purchase your new phone, well you’ve done quite the opposite.

I used to spend more than $2000 a year on Samsung products. Think of it this way: if you had swallowed the $300 charge and helped me fix my phone, you would’ve made that back from my purchases next year. I can tell you that next year I will spend $0 on Samsung products.

In the words of Mr. Wonderful:

Samsung, you are dead to me.

The differences between Linux shells (namely BASH and TCSH)

Overview

The two most common shells in Linux are CSH and BASH. Shells are programs that allow a user to enter and execute commands, and that interpret entire files of commands called scripts. You can work with Graphical User Interfaces (GUIs) in Linux, but the real power of Linux is that it was designed around Command Line Interfaces (CLIs), so you realize significant benefits when working from a shell.

I would recommend you pick one of these shells and learn it, and that you learn it well.

The Korn shell (KSH) is another fairly popular shell which, like BASH, extends the Bourne shell (sh), but I won’t spend a lot of time talking about KSH here. It is worth learning a little about KSH; you may like the fact that it is backward compatible with sh and has an interactive style similar to CSH.

C shell – CSH/TCSH

The C shell shares many flow-control constructs with the C programming language, and thus this shell is dubbed the “C shell”. TCSH (pronounced “tee-shell” or “tee-cee-ess-aitch”) is really just CSH with command-line completion and other features; most people using CSH today are really using TCSH, yet refer to their shell as CSH. For the remainder of this post, CSH will be used to refer to TCSH as a whole.

Bourne again shell – BASH

The Bourne shell (SH) was the standard shell in older versions of UNIX (starting with the Seventh Edition). As part of the GNU project, the Bourne shell was re-implemented to provide interactive features and this newer version of the Bourne shell is called the Bourne again shell (BASH). Almost all UNIX based systems today are still delivered with SH or what is really BASH emulating SH. Of the popular shells, BASH/SH is the oldest and most widely used.

Because virtually every UNIX based system is delivered with BASH, you will undoubtedly have to use it if you are working on Linux for an extended period of time.

Regardless of your preferred shell, I would also recommend you learn the basics of BASH.

Interactive features of CSH and BASH

CSH

One benefit of CSH is that it has a truly interactive feature not found in BASH. For instance, the following lines are evaluated interactively (i.e. line-by-line) in CSH:

> if ( 1 ) then
>  echo true
true
> endif

BASH

While the original SH (Bourne shell) is more limited, BASH does support an interactive mode. However, it might behave differently than you would expect, as BASH evaluates commands in logical blocks.

There are some subtle differences between CSH and BASH interactive modes. For example, BASH will evaluate blocks at the end whereas CSH will evaluate blocks as they are entered. Here is the same example as provided for CSH, but now provided for BASH:

$ if [ 1 ]; then
>  echo true
> fi
true

Both shells behave very similarly, with one difference: in interactive mode, BASH evaluates complete blocks of logic while CSH evaluates line-by-line.

Variables

Shell Variables

BASH

In BASH you can set a shell variable using:

$ abc=123

BASH does not allow spaces around the equal sign; with spaces, the variable name is interpreted as a command:

$ abc = 123
bash: abc: command not found

When we say a shell variable we mean that child processes will not inherit the variable.  This enables you to set variables and work with them locally, and to be sure that what you did won’t have some perverse effect on child processes that you start.

To demonstrate what a shell variable means, let’s create a run.sh script:

#!/bin/bash
echo switch:$switch

To execute this script, we would use (make sure it has executable permissions [chmod +x ./run.sh]):

$ ./run.sh
switch:

From here you will see that nothing is printed for the switch variable.

CSH

In CSH you can set a shell variable using:

> set abc=123

CSH trims spaces:

> set abc =   123
> echo "'${abc}'"
'123'

Let’s create a script to demonstrate some of the use cases.  Let’s call it run.csh:

#!/bin/csh
echo switch:$switch

To invoke this script, we would use (make sure it has executable permissions [chmod +x ./run.csh]):

> ./run.csh
switch: Undefined variable.

As you can see, for CSH an unset variable is an error condition.

Referencing Undefined Variables

BASH

Referencing unset variables is allowed in BASH by default. As you saw above with the run.csh script, the same reference is an error condition in CSH. Earlier we ran the run.sh script, which has very similar code, and it did not error. Here is that example again:

$ ./run.sh 
switch:

We can actually force BASH to report an error for undefined variables. This is done with the set builtin; to make our script behave the same way as the CSH one we would do this:

#!/bin/bash
set -u
echo switch:$switch

Running this script again will produce an error now:

$ ./run.sh 
./run.sh: line 3: switch: unbound variable
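If you want the strictness of set -u but still need to tolerate a specific unset variable, BASH’s default-value expansion can supply a fallback per reference. A minimal sketch:

```shell
#!/bin/bash
set -u
# ${switch:-off} expands to 'off' when switch is unset,
# so this line does not trigger an unbound-variable error:
echo "switch:${switch:-off}"
```

Running this without setting switch prints switch:off instead of aborting.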

CSH

As we already pointed out CSH will error, by default, if an undefined variable is referenced. Here is that example provided again:

> ./run.csh
switch: Undefined variable.

Environment Variables

BASH

From here you can run export to make this variable accessible to the child processes:

$ switch=on
$ export switch
$ ./run.sh
switch:on

$ echo test:$switch
test:on

You can simplify this by declaring, defining, and exporting the variable all in the same line:

$ export switch=off
$ ./run.sh 
switch:off

CSH

In CSH we use the setenv command to make environment variables visible to child processes. Here is an example of that syntax:

> setenv switch on
> ./run.csh
switch:on

> echo test:$switch
test:on

Per Command Variables

BASH

With BASH we can actually set this variable on a per command line basis:

$ switch=on ./run.sh
switch:on

$ echo test:$switch
test:

Setting a variable on the same line as a command, in BASH, will affect the child process. However, the local shell or script will not be impacted (as shown above with ‘test:’).

Furthermore, we can also run this on multiple lines and see different results than when set on the same line as a command:

$ switch=on
$ ./run.sh
switch:

$ echo test:$switch
test:on

In the script you can see that the variable shows as unset, even though in the parent shell we have set it. This is what is meant by a shell variable.

CSH

CSH doesn’t have as convenient a way as BASH for setting a variable on a per-command basis, but you can still do this using subshells:

> (setenv switch on; ./run.csh)
switch:on

> echo test:$switch
test:

Setting a variable in a subshell will not impact the parent shell or script.

However, we can also set the variable on its own line, with different results:

> set switch=on
> ./run.csh
switch: Undefined variable.

> echo test:$switch
test:on

In the script the variable is undefined (an error condition in CSH), even though we set it in the parent shell. set creates a shell variable, which child processes do not inherit – this is what is meant by a ‘local’ variable.

Unset a Variable

BASH

Once a variable is set you can unset and unexport it with the unset command:

$ export switch=off
$ unset switch
$ ./run.sh 
switch:

Even if you set the variable again, it is no longer exported after an unset:

$ export switch=off
$ unset switch
$ switch=on
$ ./run.sh 
switch:

However, it is important to note that once a variable is exported, changes to its value will impact new child processes:

$ export switch=off
$ ./run.sh 
switch:off
$ switch=on
$ ./run.sh 
switch:on
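BASH also has export -n, which removes the export attribute while keeping the shell variable. A short sketch:

```shell
#!/bin/bash
export switch=off
export -n switch                  # un-export, but keep the value
bash -c 'echo "child:$switch"'    # child no longer sees it
echo "parent:$switch"             # parent still has 'off'
```

This is a convenient middle ground when you want to stop propagating a variable without losing its value in the current shell.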

CSH

Before we talk about how to unset a variable in CSH, we should talk about how variables are managed in CSH. Shell variables and environment variables are handled by completely independent mechanisms; only the syntax for referencing them is the same.

What is meant by independent mechanisms? Shell variables can be set and unset independently of environment variables. Once a variable is set you can unset it with the unset or unsetenv command. Here’s an example:

> setenv switch on
> echo envvar:$switch
envvar:on
> set switch=off
> echo shell:$switch
shell:off
> unset switch
> echo envvar:$switch
envvar:on
> unsetenv switch
> echo $switch
switch: Undefined variable.


Variable interpretation

Both shells will evaluate non-quoted variable expressions

Here is an example in CSH:

> set test = "value"
> echo $test
value

Here is an equivalent example in BASH:

$ test="value"
$ echo $test
value

Neither shell will evaluate single-quoted variable expressions

Here is an example in CSH:

> set test = "value"
> echo '$test'
$test

Here is an equivalent example in BASH:

$ test="value"
$ echo '$test'
$test

Both shells will evaluate double-quoted variable expressions

Here is an example in CSH:

> set test = "value"
> echo "$test"
value

Here is an equivalent example in BASH:

$ test="value"
$ echo "$test"
value

Things start to differ when escaping the dollar sign ($) inside double quotes. Here is an example script, test.sh, showing how BASH handles a trailing dollar sign inside double quotes:

#!/bin/bash
#test
check="test"
grep "${check}$" $0

This script searches (using grep) this file ($0 is the script itself) for lines that end with “test” (in a regex, $ signifies end-of-line), and it works as expected. Running this script finds the ‘#test’ line:

$ ./test.sh
#test

However, there are some nuances in CSH. For example, this script (test.csh) looks like it should work:

#!/bin/csh
#test
set check = "test"
grep "${check}$" $0

This actually produces an error:

> ./test.csh
Variable name must contain alphanumeric characters.

In CSH this should actually be written as follows (the quotes are removed from the grep pattern):

#!/bin/csh
#test
set check = "test"
grep ${check}$ $0

This will work as expected:

> ./test.csh
#test

It is nearly impossible to escape a dollar sign ($) inside double quotes in CSH. However, single quotes prevent expansion, and with unquoted variables a backslash escape can solve the problem.
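One workaround that behaves the same in both shells is to concatenate quoting styles: keep the variable in double quotes and the literal dollar sign in single quotes. A minimal BASH sketch:

```shell
#!/bin/bash
check="test"
# "${check}" is expanded; the adjacent '$' stays literal:
printf '%s\n' "${check}"'$'
# prints: test$
```

The same concatenation (`"${check}"'$'`) works as a grep pattern argument, sidestepping the escaping problem entirely.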

Syntax checking in BASH

Syntax can be checked in BASH with:

bash -n script.sh

This check is nice because it does not execute any of the commands; it parses the entire script, including all logical branches, for syntax errors. If you forget to add a semicolon after a square bracket in an if statement, such as in this test.sh file:

#!/bin/bash
one=1
if [ $one ] then
  echo test
fi

Then the following syntax error will be reported:

./test.sh: line 5: syntax error near unexpected token `fi'
./test.sh: line 5: `fi'

However, a simple mistake such as adding spaces around the equal sign in a variable assignment will not be caught:

#!/bin/bash
one = 1
if [ $one ]; then
  echo test
fi

This is because the syntax checker doesn’t consider this a syntax error – it thinks that one is a command. This script will instead fail at run-time with the following error:

./test.sh: line 2: one: command not found

Bottom-line: The syntax checker in BASH is nice and should be used to check syntax of BASH scripts, but be aware that the syntax checker will not catch everything.  Unfortunately, CSH doesn’t have an equivalent.
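Because bash -n exits with a nonzero status on a syntax error, it can also gate a commit hook or build step. A small sketch using the broken script from above:

```shell
#!/bin/bash
# Re-create the broken script (missing semicolon before 'then'):
cat > /tmp/bad.sh <<'EOF'
#!/bin/bash
one=1
if [ $one ] then
  echo test
fi
EOF

if bash -n /tmp/bad.sh 2>/dev/null; then
  echo "syntax OK"
else
  echo "syntax error"   # this branch is taken
fi
```

The same pattern drops into a pre-commit hook or CI job: loop over changed .sh files and fail the step if any bash -n check fails.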

Startup scripts

BASH

The following scripts are used in BASH…

Login

  • /etc/profile (system)
  • /etc/bash.bashrc (system)
  • ~/.bash_profile
  • ~/.bash_login
  • ~/.profile
  • ~/.bashrc

Opening a shell

  • ~/.bashrc

Running a script

  • /etc/bash.bashrc (system)

CSH

The following scripts are used in CSH…

Login

  • /etc/csh.cshrc (system)
  • /etc/csh.login (system)
  • ~/.tcshrc
  • ~/.cshrc
  • ~/.history
  • ~/.login
  • ~/.cshdirs

Opening a shell

  • /etc/csh.cshrc (system)
  • ~/.tcshrc
  • ~/.cshrc

Running a script

  • ~/.cshrc

Prevent loading the .cshrc file

The .cshrc file is where you can specify CSH commands to be executed every time you start CSH. However, sometimes you want a default CSH (i.e. one where your custom commands have not run), and this can be done with the -f switch. This can, and should, be set in all CSH scripts, as it helps ensure that you are not depending on settings in your current environment:

#!/bin/csh -f
env

This script will show you what your environment variables looks like without your .cshrc file being sourced. You can also use the -f switch when opening a new shell:

> csh -f

Prevent loading the .bashrc and .bash_profile files

The .bashrc and .bash_profile files are places where you can specify custom commands to be executed every time you start a BASH shell or log in, respectively. However, sometimes you want a default BASH (i.e. one where your custom commands have not run), and this can be done with the --norc and --noprofile switches. These arguments can only be specified on the command line:

$ bash --norc --noprofile

Running with a clean environment

Have you ever been stuck seeing strange behavior that nobody else on your project is seeing? Have a sneaking suspicion that your environment is messed up? In swoops the ‘env’ command to save the day!

Running ‘env -i <command>’ starts the command with an empty environment, so it does not inherit your environment variables. When used in conjunction with the switches that prevent startup scripts from running, you can be pretty confident that you have a clean environment.

BASH

To run clean:

$ env -i bash --norc --noprofile

CSH

To run clean:

> env -i csh -f

PATH and path

Both BASH and CSH have and use the $PATH environment variable. However, CSH also has a shell variable called $path, which can be manipulated in a slightly different way. The PATH environment variable exists so that the shell can find executables, which is done by adding directories containing executables to it. CSH gives us a direct and convenient way to do this:

set path = ($path /usr/local/bin)

The lower case path variable also enables direct access by index:

> echo $path[1]
/home/pi/bin
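BASH has no $path array, but the same append can be done on the colon-separated string directly. A minimal sketch:

```shell
#!/bin/bash
# Append a directory to the search path:
PATH="$PATH:/usr/local/bin"

# There is no index syntax, but entries can be split on ':' for inspection:
echo "$PATH" | tr ':' '\n' | tail -n 1
# prints: /usr/local/bin
```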

Redirects catching just STDERR, STDOUT, or both

BASH

Let’s start with a script that prints a message to both STDERR and STDOUT:

#!/bin/bash
echo "message intended for stdout"
(>&2 echo "message intended for stderr")

Now if we call this script this is what we’ll see:

$ ./std.sh 
message intended for stdout
message intended for stderr

In BASH to redirect STDOUT only:

$ ./std.sh > out.log
message intended for stderr
$ cat out.log
message intended for stdout

In BASH to redirect STDERR only:

$ ./std.sh 2> err.log
message intended for stdout
$ cat err.log
message intended for stderr

In BASH to redirect both:

$ ./std.sh >& std.log
$ cat std.log
message intended for stdout
message intended for stderr

In BASH to redirect STDOUT to one file and STDERR to another:

$ ( ./std.sh 2> err.log ) > out.log
$ cat err.log
message intended for stderr
$ cat out.log
message intended for stdout
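BASH (unlike CSH) can also split the two streams in a single command, with no subshell needed. A sketch, re-creating std.sh so it is self-contained:

```shell
#!/bin/bash
# Re-create the example script from above:
cat > std.sh <<'EOF'
#!/bin/bash
echo "message intended for stdout"
(>&2 echo "message intended for stderr")
EOF
chmod +x std.sh

# Redirect STDOUT and STDERR to separate files in one line:
./std.sh > out.log 2> err.log
cat out.log    # message intended for stdout
cat err.log    # message intended for stderr
```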

CSH

We will use the same script provided above (BASH) that prints a message to both STDERR and STDOUT, but we will invoke this script from CSH and manipulate the STDERR and STDOUT streams:

#!/bin/bash
echo "message intended for stdout"
(>&2 echo "message intended for stderr")

Now if we call this script this is what we’ll see (remember we are in CSH):

> ./std.sh
message intended for stdout
message intended for stderr

In CSH to redirect STDOUT only:

> ./std.sh > out.log
message intended for stderr
> cat out.log
message intended for stdout

In CSH to redirect STDERR only:

> ( ./std.sh > /dev/tty ) >& err.log
message intended for stdout
> cat err.log
message intended for stderr

In CSH to redirect both:

> ./std.sh >& std.log
> cat std.log
message intended for stdout
message intended for stderr

In CSH to redirect STDOUT to one file and STDERR to another:

> ( ./std.sh > out.log ) >& err.log
> cat err.log
message intended for stderr
> cat out.log
message intended for stdout

Double Layered Firewall – A new approach to home internet security in uncertain times

TL;DR

Use two routers:

  1. For IoT, Phones, Consoles, etc. Enable UPnP on this router so all of these devices will work as expected and so they can open the ports they need. Enable WiFi on this router.
  2. For Desktop computers, network storage devices, media services, etc. Disable UPnP on this router so that you know what ports are open. Disable WiFi on this router.

Connect the routers as: Internet <-> IoT (UPnP) Router <-> Router with Controlled Ports

Why

The current state of security in software and internet services is not quite up to par; I’d even go so far as to say it is broken.

I purchased a Foscam security camera a while back. After having it up and running in my house for a month or two, I decided to check whether there were any known hacks for this camera, and I was pretty upset to find out that a user could just go to <my_ip_address>/proc/kcore and get a complete dump of the filesystem, including non-encrypted versions of my home network WiFi password and the username and password used to log in and control the camera. To learn more, see here:

http://foscam.us/forum/fix-for-the-path-traversal-vulnerability-on-older-devices-t4805.html

After this I changed my home security setup, and I wanted to share it with you because it is simple – and I bet you have an old router lying around that you could put to good use 🙂

Resolving Git Errors

The Basic Steps

TL;DR – If you are stuck trying to debug git, then here is a step by step process I use to figure out the problems:

  1. Check permissions of the directory you are cloning into
  2. Ping the server with the repository
  3. Try a different protocol (git[ssh] <-> https <-> http)
  4. GIT_TRACE=2
  5. GIT_SSH_COMMAND="ssh -v"
  6. GIT_CURL_VERBOSE=1

Protocols

When cloning repositories from git you can use a lot of different syntax. You can clone repositories that are local by providing a local path instead of a network path, and you can use different protocols.

Note: The Git protocol doesn’t use authentication and is generally used only for pulling; it is paired with HTTPS or SSH for pushing. The Git transport protocol uses port 9418.

Many IT teams block SSH (port 22), so simply switching to HTTPS (port 443) or HTTP (port 80) will often work, as these ports are generally open for web services. If you are using a non-public repository, then HTTP will not work.

For more information on Git Protocols see:
https://git-scm.com/book/en/v2/Git-on-the-Server-The-Protocols
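If you already have a clone and just need to try a different protocol, you don’t have to re-clone; the remote URL can be rewritten in place (the remote name ‘origin’ and the repo URL here are illustrative):

```shell
# Switch an existing clone from SSH to HTTPS:
git remote set-url origin https://github.com/user/repo.git

# Confirm the change:
git remote -v
```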

Timeouts

Timeouts can occur due to several reasons, some of these are servers being down, missing repositories, overloaded servers, permissions, etc.

A timeout looks like this:

$ git clone git@github.com:user/repo.git
Cloning into 'repo'...
ssh: connect to host github.com port 22: Connection timed out
fatal: Could not read from remote repository.

Please make sure you have the correct access rights and the repository exists.

From here you can set the variable GIT_TRACE to debug:

$ GIT_TRACE=2 git clone git@github.com:user/repo.git
trace: built-in: git 'clone' 'git@github.com:user/repo.git'
Cloning into 'repo'...
trace: run_command: 'ssh' 'git@github.com' 'git-upload-pack '\''user/repo.git'\'''
ssh: connect to host github.com port 22: Connection timed out
fatal: Could not read from remote repository.

Please make sure you have the correct access rights and the repository exists.

This doesn’t give us much, but it makes it clear that we are using the ssh protocol and trying to communicate on port 22.

Unable to connect

Unable to connect errors can also occur which look like:

$ git clone git://github.com/user/repo.git
Cloning into 'repo'...
fatal: unable to connect to github.com:
github.com[0: 192.168.1.100]: errno=Connection timed out

This is telling you that you couldn’t communicate with the Git server. Make sure you can ping the server (ping github.com). If ping works, then perhaps the required ports are not open, or you have UPnP disabled on your router (which you should have).

Open any ports you need for protocols you are using – if possible.  If you are at work or for some reason cannot forward/open a port, then go ahead and try using the https:// protocol:

git clone https://github.com/user/repo.git

If you are seeing other problems, then try running:

export GIT_SSH_COMMAND="ssh -v"
git clone git://github.com/user/repo.git

This will change the Git ssh command to: ssh -v and it will print verbose messages about the connection (this is just ssh verbose mode).

If you are still stuck, then one of the best things you can do is to enable git debugging:

GIT_CURL_VERBOSE=1 git clone git://github.com/user/repo.git
GIT_TRACE=2 git clone git://github.com/user/repo.git

Proxies

Another problem several people have is trying to make git work through a proxy.  One way to do this is to tunnel SSH traffic through a proxy (e.g. proxy host is ‘proxy’ and port is ‘1080’):

# ~/.ssh/config
ProxyCommand /usr/bin/nc -X 5 -x proxy:1080 %h %p

Note: Your IT team will probably be able to see all of your SSH activity. You should certainly make sure that you have permission to do this from your IT team before you attempt it.
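If you use HTTPS remotes instead, git has its own proxy setting and no SSH tunneling is needed; a sketch using the same hypothetical proxy host and port as above:

```shell
# Route git's HTTP(S) traffic through the proxy:
git config --global http.proxy http://proxy:1080

# Verify the setting:
git config --global --get http.proxy
```

You can remove it later with git config --global --unset http.proxy.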

Resources

GitHub Help: https://help.github.com/articles/https-cloning-errors/

New Feature Documentation

I’ve implemented a lot of features and have had to interface with several cross-functional teams, all balancing competing priorities that didn’t always align. This can be very challenging when management is asking for updates and you are stuck waiting for someone.

After iterating several times I’ve found a strategy that works well for me and I will provide you with an example below. Let’s say that the new feature is to add a Flak cannon to our Robot.

Here is the way I document new features, I hope this helps you:

Robot Wars 2.0 – Flak Cannon

Content

  • Scope
  • Goals
  • Tasks
  • Notes
    • Fixed Problems
    • Benign Problems
  • Overview
  • References

Scope

Several new weapons need to be added to facilitate player progression into new content that will have more life than previous releases.

The Flak Cannon is a shrapnel firing weapon that has an Area-of-Effect.

Goals

  1. Add Flak Cannon Object so that others can start parallel development work
  2. Make Flak Cannon fireable
  3. Balance stats for Flak Cannon
  4. Have 95% coverage for Flak Cannon

Tasks

ID | Priority | Task | Owner | Target Delivery Date | Status | Comments
1. Create Flak Cannon Object
1a | 1 | Create the Object | Alice | 1/2/2101 | [started] | The objects exist and have been checked in, but the ability to add a graphic is still missing.
1b | 1 | Add to the WeaponFactory | Alice | 1/4/2101 | [blocked by 1a] |
2. Create Flak Cannon Assets
2a | 1 | Create the Cannon Firing Animation | John | 1/3/2101 | [done] | The animation glitches at the end. John is investigating, see CR 101.
2b | 1 | Create the Firing Sound | Bob | 1/3/2101 | [not started] |

Notes

  1. While creating the Flak Cannon we found that the Cannon Ball weapon has many similarities, so a new parent class called OURCannon.cxx was created

Fixed Problems

AoE Calculations

There were problems adapting the AoE calculations to this new weapon; we had to add an entirely new weapon pattern called TightConePattern.

Benign Problems

Recoil

The cannon’s recoil animation looks a little off. We ended up removing recoil from this weapon, so we did not need to debug it. However, we noticed that this weapon’s animation time differs from all the others, which may be the cause of the problem.

Overview

The new Flak Cannon that we added in release 2.0 can be disabled by setting arena flag:

DisableWeapons = ["FlakCannon"]

The damage of this weapon can be altered using:

unsigned int damage = 20;
setWeaponDamage("FlakCannon", damage);

References

The documentation for the WeaponFactory can be found here: WeaponFactory.html

When to throw and when not to throw?

… that is the question. Not really, the real question is:

When should I throw and when should I return?

Existing Systems

If you are already working in an infrastructure of a large system, then you are at the mercy of that system… unless of course you are a System Architect and you are expected to change this 😉

When working in an existing system the basic rules are:

  1. If you can throw, stop the current user’s request, and safely return back to a user prompt when throwing, then you should throw.
  2. If you are not guaranteed that a throw will notify the user of the problem and safely return them to a prompt, then you need to return so that you are providing this feedback to the user before the system takes over.

Most of the debate on this topic stems from system design, and if you are laying out a new Framework, then what is best?

Checked Exception Specifications

One major consideration I use when making this decision is:

When using the “return type indicates failure” design model, the architecture of the system must handle ALL code that can throw an exception at the lowest-level calls. Without this, you run the risk of missing a throw, which risks bringing the application down.

Admittedly, that is the very purpose of an exception – an unknown condition occurred and we want the program to stop before it runs awry and maybe does something worse than just exiting. Sometimes just exiting is good. Most of the time, though, we are expected to implement system recovery, and we shall not let the application die. Programs will throw; they always have, and they always will.

In languages like Java, methods that throw checked exceptions are very visible and you must either handle the exceptions from calls to these methods, or propagate the exception upwards.  You can of course use unchecked exceptions in Java (i.e. Runtime Exceptions or Errors) which will also bring the application down.

In C++ you have the option of declaring an exception specification with ‘throw’, but this has several limitations, doesn’t really solve any of the problems you might hope it would, and was deprecated in C++11 (and removed in C++17 in favor of noexcept).

New System Architecture

Finally, we move into the meat of the discussion that you probably care about.

The basics are this: any good system architecture will be able to handle failures and recover from them. This includes exceptions, even if you are designing a system that will use return values to signal failure. To design a system like this you must design in modules whose bounds are well established, so that in case of catastrophic failure you can help the system recover and understand what data can and cannot be trusted.

Any language worth developing in makes the basic guarantee that when an exception is thrown, and the stack starts to unwind, destructors are called, things are cleaned up as best the programmer instructed, and no leaks occur. This may mean that some objects are left in an unstable state, but those objects should still be able to be destroyed, or even used – even if their state is not predictable.

With this understanding we can make a pretty solid design principle, which would state something like:

A well designed system architecture, one whose modules are perfectly decoupled and which uses RAII as it should, would lead to modules being the perfect boundaries at which exception handling can guarantee system recovery in the face of catastrophic failure.

All this really means is that a caller should not know about the inner workings of sub-modules (i.e. decoupling). If this practice is followed, and the modules do a good job of cleaning up after themselves, then if an unrecoverable exception occurs anywhere in a module, the caller can catch that exception, understand the full scope of the failure, handle it gracefully, and continue on without being concerned that its state or its children have been left in an unpredictable state.

Note: Modules in this context are arbitrary and used to communicate the conceptual boundaries between parts of a systems design. 

To extrapolate on this a little, and to get at the answer you are looking for: a well designed system must use exception handling at module boundaries. Within the boundaries, do what makes sense; sometimes using return types to make decisions about failures makes life easier, and sometimes throwing and giving control back to the user as soon as possible makes life easier.

Do understand that throwing exceptions should never be routine control flow; they are costly from a performance perspective and rely on developers doing a good job of cleaning up. But in the case of truly unexpected conditions, they are sometimes the only call that makes sense.