bash shortcuts

tl;dr: bind common commands to shortcuts. save time. love life.

cost: $0 or $29 one time with Alfred Powerpack

read time: 15 minutes


Below, I go over the bash snippets I regularly use and how you can easily invoke them with shortcodes.

table of contents: how to implement · general · virtualenv / dependencies · github · serverless.com · AWS · heroku · flask · django · postgres · docker · CSVs · misc · but wait, there's more

how to implement

I've bound my most commonly used bash snippets to shortcodes to use them more efficiently. I prefer using Alfred for the snippet text expansion, as its expansions work in any field that accepts text. A downloadable .alfredsnippets file that includes all of the below snippets can be found here.

If you only want this functionality in your terminal (for free), you can add them as aliases in your bash_profile file instead (example below ā¬‡ļø)

# Add an environment key:val pair
echo 'export ENV_VAR="specific value"' >> ~/.bash_profile
# Add a bash snippet alias (note: bash alias names can't contain ";", so drop the prefix here)
echo 'alias cdp="cd ~/Dropbox/projects/specific/folder/pathing"' >> ~/.bash_profile
# Make sure to update your terminal session after
source ~/.bash_profile

(you can check your bash_profile at any time with)

cat ~/.bash_profile

Mac users can alternatively use System Preferences -> Keyboard -> Text to bind shortcodes (Guide)


general

;genkey - generate a random string of characters (e.g. for a password)

LC_ALL=C tr -dc 'A-Za-z0-9' </dev/random | head -c 40 ; echo

;folders - map the current folder structure

ls -R | grep "^[.]/" | sed -e "s/:$//" -e "s/[^\/]*\//--/g" -e "s/^/   |/"

;filesgrep - grep the contents of the files in the current folder, recursively (change foo)

find ./ -type f -print0 | xargs -0 grep "foo"

;history - export your bash shell history to a .txt file

history > history_for_print.txt

;exit - exit Vim šŸ˜›

:wq


virtualenv / dependencies

;venv - generate a virtual environment (you can alternatively use pipenv shell)

virtualenv venv && source venv/bin/activate

;pipv - install a specific version of a dependency (change pyyaml)

pip install --force-reinstall pyyaml==3.12

;listlibs - get a list of every installed library, ranked by size (SO)

pip3 list --format freeze|awk -F = {'print $1'}| xargs pip3 show | grep -E 'Location:|Name:' | cut -d ' ' -f 2 | paste -d ' ' - - | awk '{print $2 "/" tolower($1)}' | xargs du -sh 2> /dev/null|sort -h

;countpy - count Python Lines of Code (LoC)

wc -l $(git ls-files | grep '.*\.py')

;delpyc - delete cached python files (.pyc)

find . -name "*.pyc" -exec rm -rf {} \;

;format - reformat the local folder's contents with black and lint it with flake8

black ./
flake8 ./

;timestamp - print the current time as a POSIX timestamp

python3
from datetime import datetime
print(int(datetime.now().timestamp()))
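
A plain-bash equivalent, if you'd rather stay out of the Python REPL:

date +%s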

;jupyter - cd into jupyter folder and start jupyter notebook (change documents/swe/jupyter)

cd documents/swe/jupyter && jupyter notebook


github

one-line git add/commit/push (SO USEFUL) - (GH origin). Open your bash_profile

open -e ~/.bash_profile

and paste in this function:

function gacp() {
    git add .
    git commit -a -m "$*"
    git push origin master
}

(update your current session with source ~/.bash_profile), then you can quickly and easily commit and push with

gacp testing one-line commit-push

;debugpush - push an empty Git commit (for debugging, redeploying, etc):

git commit --allow-empty -m "Intentionally empty debug push"
git push origin master

;gpr - push your local branch to a remote branch and prepare a pull request (change NEW_BRANCH_NAME)

git push origin master:NEW_BRANCH_NAME
# alternately, to any repo to which you have PR rights
git push git@github.com:USERNAME/REPO.git master:NEW_BRANCH_NAME

;greset - go back one commit (docs on hard vs soft)

git reset --soft HEAD~1
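
For contrast, --hard also discards the changes instead of leaving them staged (use with care):

git reset --hard HEAD~1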

;ginit - create a new repo and attach to a remote one

git init && git add . && git commit -m "initial commit" && git remote add origin git@github.com:alecbw/repo-name.git

;gitignore - pull the standard python .gitignore to your local directory

curl -sL https://www.gitignore.io/api/python > .gitignore
# optionally exclude spreadsheets and procedural config
curl -sL https://www.gitignore.io/api/python > .gitignore && echo $'\n# Spreadsheets\n*.csv\n*.xlsx\n*.xls\n*.numbers\n*.ods\n\n# Misc Config\n*.pyc\n*.DS_Store' >> .gitignore
# optionally add things with
open -e .gitignore

;gauth - test your SSH authentication to GitHub

ssh -vT git@github.com


serverless.com

;slstd - run local tests and deploy the Serverless stack (change specific-serverless.yml and the tests_folder.test_file pathing)

python3 -m tests_folder.test_file && sls deploy --conceal --config "specific-serverless.yml" --stage prod

;slsgd - commit changes and deploy the Serverless stack (change specific-serverless.yml)

git add . && git commit -m "Automatic commit with sls deploy" && sls deploy --conceal --config "specific-serverless.yml" --stage prod

Note: you'll want to move to a push-to-deploy model as your stack matures. I've enjoyed using CircleCI for that.
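
As a rough sketch, such a CI job could run the same test-and-deploy steps on each push to master (the requirements.txt filename and the paths are assumptions; adapt them to your CI provider's config format):

# steps a CI job (e.g. CircleCI) might run on each push to master
pip install -r requirements.txt    # project deps (assumed filename)
npm install -g serverless          # the sls CLI (requires node on the build image)
python3 -m tests_folder.test_file && sls deploy --conceal --config "specific-serverless.yml" --stage prod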

;slsil - locally test a given Lambda with hardcoded data (change the function and data and make as many variants as you want)

sls invoke local -f gsheet-read -d '{"Gsheet":"1RrgpAQx0Dz5gi-FOQVcictoTI5OZDoSdUvVNCteZSX4", "Tab":"Sheet1"}'


AWS

;awsaccount - set the Account ID as an env var

export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)

;ec2desc - describe your EC2 instances (change PROFILE)

aws ec2 describe-instances --profile PROFILE

;dydesc - get the size and description of a Dynamo table (change tableName) (case sensitive)

aws dynamodb describe-table --table-name tableName

;s3desc - get the total size and object count of an S3 bucket (change bucket-name)

aws s3api list-objects --output json --query "[sum(Contents[].Size), length(Contents[])]" --bucket bucket-name

(the above query iterates through every item in the bucket. If you have millions or more, use this CloudWatch query to get the S3 bucket size instead) (change bucket-name)

now=$(date +%s) && aws cloudwatch get-metric-statistics --namespace AWS/S3 --start-time "$(echo "$now - 86400" | bc)" --end-time "$now" --period 86400 --statistics Average --region us-west-1 --metric-name BucketSizeBytes --dimensions Name=StorageType,Value=StandardStorage Name=BucketName,Value=bucket-name

;s3sync - download an S3 bucket's contents (change path/to/bucket-name)

aws s3 sync s3://path/to/bucket-name .

;s3ttl - check the Lifecycle Policies of an S3 bucket (change bucket-name)

aws s3api get-bucket-lifecycle-configuration --bucket bucket-name

;s3newbucket - create a new bucket (change bucket-name)

aws s3api create-bucket --region us-west-1 --create-bucket-configuration LocationConstraint=us-west-1 --bucket bucket-name

;checkstack - check the status of your CloudFormation stack (in case it's stuck in Rollback hell) (change YOURSTACK)

aws cloudformation describe-stacks --query 'Stacks[0].StackStatus' --output text --stack-name YOURSTACK


heroku

;hconfig - show Heroku config, incl env vars

heroku config

;hkill - restart a specific dyno (change worker.1)

heroku restart worker.1

;hlog - stream heroku logs (change app-name)

heroku logs -a app-name --tail

;hdb - open an SSL connection to a Heroku Postgres database (change app-name)

heroku pg:psql --app app-name

;hlocal - run localhost version of Heroku server (http://0.0.0.0:5000/)

heroku local web

;hmigratedb - run Heroku Django DB migration

heroku run python3 manage.py makemigrations
heroku run python3 manage.py migrate
heroku run python3 manage.py showmigrations


flask

;flrun - run a local Flask server (change FlaskApp.py)

export FLASK_APP=FlaskApp.py && flask run

;fldebug - put Flask into debug mode

export FLASK_DEBUG=1


django

;djdebug - put Django into debug mode, locally and on Heroku. Note: this is not safe for production

export DEBUG_MODE=True && heroku config:set DEBUG_MODE=True

;djlocal - run localhost version of server

python3 manage.py runserver

;migratedb - create or update db tables. Django will make a corresponding file in the boards/migrations directory.

python3 manage.py makemigrations
python3 manage.py migrate
python3 manage.py showmigrations


postgres

;connectdb - connect to a Postgres instance

PGPASSWORD=foobarbaz psql --host=host.zone.aws.amazon.com --port=5432 --username=awsuser --dbname=mypgdb
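
To keep the password out of your shell history, you can alternatively put the same values in ~/.pgpass (format: host:port:db:user:password) and drop it from the command:

echo 'host.zone.aws.amazon.com:5432:mypgdb:awsuser:foobarbaz' >> ~/.pgpass
chmod 600 ~/.pgpass
psql --host=host.zone.aws.amazon.com --port=5432 --username=awsuser --dbname=mypgdb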

;tablenames - get a list of table names

\dt

;exportdb - export current table to CSV (change tableName)

\copy tableName TO 'OUTPUT_FILE_NAME.csv' DELIMITER ','

;populateRDSwithS3 - fill an RDS instance with a CSV in an S3 bucket - execute in the psql shell (More here)

CREATE TABLE t1 (col1 varchar(80), col2 varchar(80), col3 varchar(80));
CREATE EXTENSION aws_s3 CASCADE;
SELECT aws_commons.create_s3_uri(
    'sample_s3_bucket',
    'sample.csv',
    'us-west-1'
) AS s3_uri \gset

-- Import the Amazon S3 data by calling the aws_s3.table_import_from_s3 function. You can set up IAM or just use your default creds
SELECT aws_s3.table_import_from_s3(
    't1', '', '(format csv)',
    :'s3_uri', aws_commons.create_aws_credentials('sample_access_key', 'sample_secret_key', '')
);


docker

;dcheck - check if the Docker daemon is running locally

if ! docker info >/dev/null 2>&1; then
    echo "This requires Docker. Make sure to enable the Docker daemon first"
fi

;dcompup - docker-compose up and build

docker-compose up --build -d

;dinspect - inspect the last run container

docker inspect $(docker ps -l -q)

;dprune - prune all unused images, containers, and volumes

docker rmi $(docker images -a --filter=dangling=true -q)
docker rm $(docker ps -qa --no-trunc --filter "status=exited")
docker volume rm $(docker volume ls -qf dangling=true)
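
Newer Docker releases bundle most of this into a single command (it also removes unused networks and build cache):

docker system prune --volumes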


CSVs

;catcsv - concatenate multiple CSVs in the local folder. A more advanced version with support for mismatched columns can be found here

cat *.csv > combined.csv

;countcsv - count the lines of a CSV without opening it. Very helpful if it's above Excel's million-or-so row limit (change your_file.csv)

cat your_file.csv | wc -l

;headcsv - get the first n rows of a CSV w/o opening it (tail works for the last n, as well)

head -n 5 your_file.csv

;addhead - add a header to a CSV without opening it

echo 'foo,bar,baz' > header.txt
cat header.txt important.csv > new_important.csv

;removehead - remove the first row of a CSV without opening it (change your_file.csv). This mutates it in place.

sed -i '' 1d your_file.csv

;overviewcsvs - open a Python shell, get every CSV and XLSX in the local directory, and print its name, row/col count, and column names

python3
import pandas as pd; import os; files = [f for f in os.listdir('.') if (os.path.isfile(f) and os.path.getsize(f) != 0 and any(x for x in [".csv", ".xlsx"] if x in f))]; print(files); df_tuples = [(f, pd.read_csv(f) if f.endswith(".csv") else pd.read_excel(f)) for f in files]; [print(df_tup[0], df_tup[1].shape, df_tup[1].columns, "\n") for df_tup in df_tuples]

misc

;whois - WHOIS the IP a domain resolves to (change url.com)

whois $(dig +short url.com | head -1)

;alltxt - change the filetype of every file in a folder (by appending .txt) (SO source)

find . -type f -exec mv '{}' '{}'.txt \;

;countfiles - count the number of files in a directory

ls -1 | wc -l

;findhere - recursively search the current dir for a string pattern (change PATTERN)

grep -rnwi . -e "PATTERN"

;pyos - quickly setup and print local env vars and files in a Python shell

python3
import os; print('\n'); print(os.environ); print('\n'); files = [f for f in os.listdir('.') if (os.path.isfile(f) and os.path.getsize(f) != 0)]; print(files)


but wait, there's more

I figured I'd include the Terminal extensions I use outside of expanding snippets:

Starship - a better prompt for the terminal, w/ current dir, AWS region, and execution times - brew install starship

open -e ~/.config/starship.toml

Tree - print the directory tree of your current folder - brew install tree

tree .

Glances - get available info about CPU, memory usage, etc - pip install glances

glances

ctop - like top or glances for Docker Containers - brew install ctop

ctop

Lazydocker - a more full fledged UI version of ctop - brew install lazydocker

lazydocker

httpie - better curl - brew install httpie

http https://httpie.org/hello

tldr - better man - brew install tldr

tldr pip

exa - better ls - brew install exa

# first add to bash_profile
alias ls=exa
alias lse="exa -l"


Thank you to Angel for contributing snippets!

Thanks for reading. Questions or comments? šŸ‘‰šŸ» alec@contextify.io