
How I cracked 30 staff accounts during my lunch break

The incident

A few weeks back at work we had an isolated incident where a staff member inadvertently exposed their password. Luckily it was only exposed internally, and to a limited number of staff, so there was no real harm done, apart from some slight embarrassment for the person whose password we saw. A password reset fixed the issue at hand. However, it got me wondering about password security.

At $dayjob we store passwords in a directory system called LDAP. It serves as a database for all user accounts, groups and a variety of other directory-related things: when staff join the organisation we add them to the database, and then the newbies can log into our various systems. When a user changes their password, something we ask them to do when they first log on, the password is immediately hashed and then stored in the LDAP database. What this means is that the password is never exposed to HR, IT or any other staff. If we did look into the LDAP database, all we would see is a string of random-looking characters, something like: aHR0cDovL2JpdC5seS8ydk0wVWIyIGZvciBtb3JlIGluZm8K

The reason I’m explaining this is that when the password was exposed by the unknowing staff member, I actually got to see a plaintext password, and I noticed it was a dictionary word with two numbers on the end, something like: Jacket01. Although this is technically an alphanumeric password with capitalisation, it doesn’t take much computing power (or guessing) to crack the hash if it is ever exposed. In fact, cracking a dictionary word with 3 random characters on the end takes 9 seconds on my Mac with no GPU processing.
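For context, that “dictionary word plus a few characters” pattern is trivial to express as an attack. A rough sketch in hashcat (its hybrid mode -a 6 appends a mask to every dictionary word, and ?a means any printable ASCII character; the -m 500 md5crypt mode and the file names here are just assumptions for illustration):

$ hashcat -a 6 -m 500 hashes.txt wordlist.txt ?a?a?a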

The idea

I then became a little curious and decided to set up a test of our password security (well, our staff passwords). I took all of the password hashes we have in LDAP, found a dictionary wordlist I had lying around, and ran a password cracking tool called hashcat over all of the hashes. I wanted to see how good (or bad) we were at choosing secure passwords. Out of the roughly 440 hashes I pulled from our LDAP tree, I was able to discover 30 or so passwords that were too weak to be considered safe and needed changing. (That was a few weeks ago.)
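For the curious: on an OpenLDAP-style directory, pulling the userPassword hashes looks roughly like the ldapsearch below. This is a sketch, not our exact setup; the bind and base DNs are made up, and the values come back base64-encoded in the LDIF output, so they need decoding before you feed them to hashcat.

$ ldapsearch -x -LLL -W \
    -D "cn=admin,dc=example,dc=com" \
    -b "ou=people,dc=example,dc=com" userPassword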

The test I ran used a dictionary wordlist with some variation rules applied. The rules do things like capitalise letters, add numbers to the start and/or end of the word, and replace letters with numbers; for example, the letter o is replaced with a numeric 0, so the word ‘password’ becomes ‘passw0rd’. This means that for each word in the wordlist, I’m able to test a number of variations, each as a separate password attempt. From a wordlist of about 16,000 words I’m able to generate 480,000 individual password candidates, roughly 30 variations per word. If I used a larger wordlist (easily done) or a longer list of variations (probably not necessary), I could likely guess even more passwords.
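If you want to see (or count) the candidates a ruleset generates without actually cracking anything, hashcat can print them to stdout. A quick sketch, assuming a wordlist.txt like the one from my test:

$ hashcat --stdout -r rules/best64.rule wordlist.txt | wc -l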

This is why having strong password entropy is important.

What makes a good password?

Having good password entropy (password strength) makes a good password. That doesn’t just mean having an alphanumeric password; it means having a password that is difficult to predict and would require a lot of computing power to brute-force. In other words, long enough and unpredictable enough that it can’t be guessed with a basic wordlist.

Of course, unless you use a password manager you still have to remember your passwords, and even then, the password you use to unlock your password manager needs to be memorable enough for you to recall it. A useful tip when trying to create a password that is memorable for you, yet difficult to crack, is to take multiple words you can remember, string them together, and then add some capitalisation and replace some characters with numbers and special characters. Check out the graphic below, courtesy of xkcd.

Image from xkcd (https://xkcd.com/936/)

correct horse battery staple

There’s a neat website which can generate passwords for you using the method above: https://www.xkpasswd.net
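If you’d rather roll your own on the command line, something as simple as this gets you most of the way there. It assumes a dictionary file at /usr/share/dict/words, which most Linux and macOS systems ship (macOS users may need coreutils for shuf):

$ shuf -n 4 /usr/share/dict/words | paste -sd '-' -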


Setting up the test

So, how did I actually crack 30 passwords? Well, you probably don’t need to go to the lengths I did; however, I wanted results in minutes/hours rather than days/weeks, so I rushed it and took advantage of AWS’s EC2, in particular the GPU instances.

Note: everything in this section assumes you kind of know what you’re doing and, if you get stuck, that you’re capable of searching Stack Overflow 🙂 It also assumes you have a list of hashes you want to crack and a dictionary wordlist to seed hashcat with.

1 – Spin up the VM

Log in to your AWS console and spin up a g2.8xlarge GPU instance, making sure you specify Ubuntu 16.04 LTS (HVM). You don’t need much disk space, so just accept the defaults. Make sure that your security group allows you access to the VM via SSH, and give it a public/elastic IP.

Grab the public IP address of your VM and SSH into it as the ubuntu user:

$ ssh ubuntu@<public-ip>

2 – Setup drivers to get maximum performance

You could totally skip this, but if you’re paying $3/hour for a VM then it makes sense to spend 1 minute getting the maximum performance out of the GPUs.

$ sudo add-apt-repository ppa:graphics-drivers
$ sudo apt-get update && sudo apt-get install nvidia-opencl-dev nvidia-cuda-dev p7zip-full linux-image-extra-virtual nvidia-370
$ sudo apt-mark hold nvidia-370
$ echo options nouveau modeset=0 | sudo tee -a /etc/modprobe.d/nouveau-kms.conf

Now you want to open up /etc/modprobe.d/blacklist-nouveau.conf with your favorite text editor and make sure it looks like:

blacklist nouveau
blacklist lbm-nouveau
options nouveau modeset=0
alias nouveau off
alias lbm-nouveau off

Update initramfs and reboot:

$ sudo update-initramfs -u
$ sudo reboot

Check to make sure the nvidia kernel modules are loaded; this should return something:

$ lsmod | grep nvidia

Fine tune the GPUs:

$ sudo nvidia-smi -pm 1
$ sudo nvidia-smi -acp 0
$ sudo nvidia-smi --auto-boost-permission=0
$ sudo nvidia-smi -ac 2505,875

3 – Install hashcat

Go to: https://hashcat.net/hashcat and download the latest binaries (link at time of writing: https://hashcat.net/files/hashcat-3.6.0.7z)

$ wget https://hashcat.net/files/hashcat-3.6.0.7z
$ 7z x hashcat-3.6.0.7z

Now prepare your wordlist and your hashes: two separate plain text files, each containing one word or hash per line. Upload both to the server, let’s say into your home directory as wordlist.txt and hashes.txt.

If you don’t have a wordlist, it should be pretty easy to find one on any of the torrent websites out there. If you’re desperate, post a comment below and I’ll share one with you privately.

So now you’ve got your server, your wordlist and your hashes, along with hashcat ready to go. I’d recommend opening up a screen or tmux session to kick off the cracking process:

$ tmux
$ cd hashcat-3.6.0/
$ ./hashcat64.bin -a 0 ~/hashes.txt -r rules/best64.rule ~/wordlist.txt -w 4

Just a word of warning: you may need to adjust the flags, and if all of your hashes are of the same type, you should tell hashcat by using the -m flag (e.g. -m 500 for md5crypt).
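Once the run finishes (or while it’s still going), you can list what’s been cracked so far from hashcat’s potfile with --show; a quick sketch, assuming the md5crypt example above:

$ ./hashcat64.bin -m 500 ~/hashes.txt --show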

Disberse Saves Charities Money By Using Blockchain To Verify Donations

The Start Network, an amalgam of 42 national and international aid agencies, reported on July 11, 2017, that it is researching blockchain-based models for delivering humanitarian aid with fund management and distribution platform Disberse.

Among the project’s goals are to speed up aid distribution and to trace transactions reliably from donor to recipient. Ultimately, the blockchain technology would act as a monitoring system to ensure those in need receive the funds in question while simultaneously mitigating exchange rate-based losses.

In current banking systems, high fees and slow transaction times create inefficiencies that can be costly to both the organizations providing aid and the individuals who need it. These issues are compounded by volatile exchange rates in countries where economic infrastructure is severely lacking, which is often the case in places marred by humanitarian crises.

Disberse combats losses from exchange rates and intermediary fees. It completed a pilot program with UK-based charity Positive Women in which it reduced losses at delivery points for a Swaziland aid project to zero. Funds were tracked as they traveled from the UK to four Swazi schools by way of a non-governmental organization; the project’s savings were enough to pay the annual fees for an additional three students.

 

Source: Disberse Saves Charities Money By Using Blockchain To Verify Donations – ETHNews.com

I love bash

Tab-complete ssh hostnames

How much time do you think you’d save if you could tab-complete the hostname every time you fired up ssh?

Put this in your .bash_profile:

# Build a list of hostnames from known_hosts (note: this only works if
# HashKnownHosts is off, otherwise the hostnames are stored hashed)
SSH_COMPLETE=( $(cut -f1 -d' ' ~/.ssh/known_hosts |\
    tr ',' '\n' |\
    sort -u |\
    grep -e '[[:alpha:]]') )
complete -o default -W "${SSH_COMPLETE[*]}" ssh
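Reload your profile and give it a try (the hostname here is just an example):

$ source ~/.bash_profile
$ ssh web<Tab>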

Enjoy 🙂


Converting an OVA to an Amazon AMI

If your job is to migrate applications in or out of public and/or private cloud environments, or you have a VM running locally that you need to move into Amazon’s EC2 service, you’re probably going to have to move a VM in its entirety between environments.

Anyway, I had the task of trying to figure out how to move a “VMware-based appliance” into a non-VMware environment, namely EC2. It was annoying, but I managed to get it working relatively easily.

Everything below assumes that you have:

  • An AWS account
  • An AWS Access Key
  • An AWS Secret Key
  • Permissions to create EC2 instances, volumes, S3 buckets, S3 objects, IAM roles, and role policies.
  • Some idea what you’re doing.

The first thing you want to do is upload the OVA archive into an S3 bucket of your choosing, preferably one that already exists (I’ll explain why later), making sure that the bucket is in the region where you want to create the initial AMI.
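If you’re doing the upload from the command line, it’s just a couple of aws s3 calls; the bucket and file names below are placeholders:

$ aws s3 cp appliance.ova s3://my-import-bucket/
$ aws s3 ls s3://my-import-bucket/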

Whilst that’s uploading, spin up a small (t2.micro is fine) EC2 instance that uses the Amazon Linux AMI. Don’t use Ubuntu or your favourite distro – just stick to the Amazon AMI. You’re only going to use this temporarily to convert the OVA to an AMI, and you don’t need anything resource intensive. You want the Amazon flavour because it comes pre-baked with all of the CLI tools, at the right versions. (I first tried it with Ubuntu, but ran into some issues with older versions and missing command options.)

Once your VM is spun up, SSH to it, run aws configure, and follow the prompts that ask for your credentials. Be sure to pick the region you actually want to deploy the new image in.

$ aws configure 
AWS Access Key ID [None]: 1234 
AWS Secret Access Key [None]: 5678 
Default region name [None]: ap-southeast-2 
Default output format [None]:

You can now test your credentials by listing the EC2 regions:

$ aws ec2 describe-regions
{
    "Regions": [
        {
            "Endpoint": "ec2.eu-central-1.amazonaws.com",
            "RegionName": "eu-central-1"
        },
        {
            "Endpoint": "ec2.sa-east-1.amazonaws.com",
            "RegionName": "sa-east-1"
        },
        <snip>
    ]
}

Have the name of the S3 bucket you uploaded the OVA file to handy.

Now create two files, trust-policy.json and role-policy.json; in the second file you’ll need to replace “$bucketname” with your bucket name.

trust-policy.json:

{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Sid":"",
         "Effect":"Allow",
         "Principal":{
            "Service":"vmie.amazonaws.com"
         },
         "Action":"sts:AssumeRole",
         "Condition":{
            "StringEquals":{
               "sts:ExternalId":"vmimport"
            }
         }
      }
   ]
}

role-policy.json:

{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Action":[
            "s3:ListBucket",
            "s3:GetBucketLocation"
         ],
         "Resource":[
            "arn:aws:s3:::$bucketname"
         ]
      },
      {
         "Effect":"Allow",
         "Action":[
            "s3:GetObject"
         ],
         "Resource":[
            "arn:aws:s3:::$bucketname/*"
         ]
      },
      {
         "Effect":"Allow",
         "Action":[
            "ec2:ModifySnapshotAttribute",
            "ec2:CopySnapshot",
            "ec2:RegisterImage",
            "ec2:Describe*"
         ],
         "Resource":"*"
      }
   ]
}

Now, use the aws cli tools to apply the policies:

$ aws iam create-role --role-name vmimport --assume-role-policy-document file://trust-policy.json
$ aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document file://role-policy.json
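You can sanity-check that the role exists before moving on:

$ aws iam get-role --role-name vmimport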

OK, you should now be able to start the import of your OVA. Copy what I have below, changing the description, bucket name and the file name of the OVA you uploaded:

$ aws ec2 import-image --cli-input-json "{ \"Description\": \"Description of my OVA\", \"DiskContainers\": [ { \"Description\": \"Disk Description\", \"UserBucket\": { \"S3Bucket\": \"bucketname\", \"S3Key\" : \"OVAFILENAME.ova\" } } ]}"
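If escaping all of those quotes inline gets painful, the same thing can be expressed with the --disk-containers flag pointing at a small JSON file (containers.json here is just a name I’ve picked):

containers.json:

[
  {
    "Description": "Disk Description",
    "UserBucket": {
      "S3Bucket": "bucketname",
      "S3Key": "OVAFILENAME.ova"
    }
  }
]

$ aws ec2 import-image --description "Description of my OVA" --disk-containers "file://containers.json"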

If you didn’t get any errors, you should now be able to watch your import progress by running aws ec2 describe-import-image-tasks:

$ aws ec2 describe-import-image-tasks
{
    "ImportImageTasks": [
        {
            "Status": "completed",
            "LicenseType": "BYOL",
            "Description": "Description of my OVA",
            "ImageId": "ami-d5abc1234",
            "Platform": "Linux",
            "Architecture": "x86_64",
            "SnapshotDetails": [
                {
                    "UserBucket": {
                        "S3Bucket": "bucketname",
                        "S3Key": "OVAFILENAME.ova"
                    },
                    "SnapshotId": "snap-abc1234",
                    "DiskImageSize": 535459840.0,
                    "DeviceName": "/dev/sda1",
                    "Format": "VMDK"
                }
            ],
            "ImportTaskId": "import-ami-fg4d51t0"
        }
    ]
}

Once that completes (it can take a while) you should be able to launch an EC2 instance from your AMI. Log in to AWS, go to EC2 -> AMIs, select your AMI, then Launch!
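If you’d rather stay on the command line, launching from the new AMI is one more call (the instance type and key name here are placeholders; the AMI ID is from the example output above):

$ aws ec2 run-instances --image-id ami-d5abc1234 --instance-type t2.micro --key-name my-key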

Cheers!