The Definitive Guide to Cryptographic Hash Functions (Part II)


Last time I talked about how cryptographic hash functions are used to scramble passwords. I also stressed why it is extremely important that no one be able to take a hash value and work backwards to figure out the plaintext input. That was Golden Rule #1 (pre-image resistance).

But if hashes can’t be reversed, why do we always hear about passwords being cracked?  And why the heck are people always telling us to create really complex, hard-to-remember passwords?

Does Password Size Really Matter?

In Part I, you saw that both “dog” and “the eagle flies at midnight” generated MD5 hash values of exactly the same length (128 bits, or 32 hex characters). What’s more, the hashes are equally hard to reverse. So what makes weak passwords weak? Answer: brute force attacks.
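If you want to see that for yourself, here is a quick sketch using Python’s standard hashlib module (my choice for illustration): both inputs come out as 32-hex-character MD5 digests.

```python
import hashlib

# Both inputs produce a 128-bit digest (32 hex characters), no matter how long they are.
for text in ("dog", "the eagle flies at midnight"):
    digest = hashlib.md5(text.encode("utf-8")).hexdigest()
    print(f"{text!r:>30} -> {digest} ({len(digest)} hex chars)")
```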

Brute Force Attacks

Instead of reversing the hash of your password, I can simply keep trying different inputs until I guess one that generates a hash that matches yours. (Remember: the hashing algorithms are public.) This is called a brute force attack, and it can be very effective at cracking weak passwords. (In fact, thanks to my spotty memory, I brute force my 4-digit garage door code almost every day.)
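Here is roughly what that looks like in code: a minimal Python sketch that grinds through every 3-letter lowercase string until the MD5 hashes match.

```python
import hashlib
import itertools
import string

def brute_force_md5(target_hash, length=3):
    """Try every lowercase string of the given length until one hashes to target_hash."""
    for candidate in itertools.product(string.ascii_lowercase, repeat=length):
        guess = "".join(candidate)
        if hashlib.md5(guess.encode("utf-8")).hexdigest() == target_hash:
            return guess
    return None

# Recover "dog" from its (unsalted) MD5 hash: at most 26**3 = 17,576 attempts.
print(brute_force_md5(hashlib.md5(b"dog").hexdigest()))
```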

A weak password that is just 3 lowercase alpha characters (e.g., “dog”) requires at most 26³ = 17,576 guesses to find a match. An attacker can shrink that number even further by limiting the guesses to the most likely candidates, like 3-character words that exist in the dictionary (try “dog” but don’t try “fgz”). This variation is, unsurprisingly, called a dictionary attack.
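A dictionary attack is the same loop, just fed from a wordlist instead of the full character space. A rough sketch (the wordlist here is a toy stand-in for the large lists of common words and leaked passwords real attackers use):

```python
import hashlib

def dictionary_attack(target_hash, wordlist):
    """Hash each candidate word and compare it against the stolen hash."""
    for word in wordlist:
        if hashlib.md5(word.encode("utf-8")).hexdigest() == target_hash:
            return word
    return None

# Toy wordlist for illustration only.
common_words = ["cat", "bird", "dog", "fish"]
print(dictionary_attack(hashlib.md5(b"dog").hexdigest(), common_words))
```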

In contrast, if a password is 8 case-sensitive alpha-numeric characters (e.g., “d0G5Fr0g”), an attacker potentially has to make 218,340,105,584,896 guesses. No thanks!
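Those numbers are just the size of each search space: the number of allowed characters raised to the password’s length. A quick sanity check:

```python
# Search space = (number of allowed characters) ** (password length)
lowercase_3 = 26 ** 3                  # 3 lowercase letters
mixed_alnum_8 = (26 + 26 + 10) ** 8    # 8 upper/lowercase letters and digits
print(f"{lowercase_3:,}")              # 17,576
print(f"{mixed_alnum_8:,}")            # 218,340,105,584,896
```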

Rainbow Tables

Generating billions of password hashes can be time-consuming and computationally expensive. As a result, crackers sometimes use rainbow tables, which are gigantic, pre-computed tables of hash values for every possible combination of characters, to speed up the cracking process.

Rainbow tables take a really long time to generate, but once they’re available (e.g., at freerainbowtables.com), they can help attackers find a match for a given hash in seconds, versus the hours, days, or months it would take to compute all the hashes themselves.
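To see why pre-computation pays off, here is a deliberately simplified sketch: build the hash-to-password table once, then crack any hash in it with a single lookup. (A real rainbow table is cleverer, chaining hash and reduction functions to keep storage manageable, but the pay-once, crack-many idea is the same.)

```python
import hashlib
import itertools
import string

# Pay the hashing cost once up front: map each 3-letter lowercase password's hash to its plaintext.
lookup = {
    hashlib.md5("".join(chars).encode("utf-8")).hexdigest(): "".join(chars)
    for chars in itertools.product(string.ascii_lowercase, repeat=3)
}

# Cracking a stolen hash is now a single dictionary lookup instead of thousands of hash computations.
stolen_hash = hashlib.md5(b"dog").hexdigest()
print(lookup.get(stolen_hash))  # -> dog
```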

It should be obvious by now that the more complex your password, the less likely its hash will be in a rainbow table.  Some of the most effective rainbow tables available are ones that contain hashes of common dictionary words, so never, ever use dictionary words as your password!

So, given that brute force attacks and rainbow tables exist, aren’t we all vulnerable?  Fear not, my friends.  Part III will feature a rather tasty solution (salt).

