Tag Archives: Security

Protecting Email with DKIM

One of the problems with email and the protocol used to transfer it (SMTP) is that both were designed long ago, when the Internet was a much friendlier place. When SMTP was designed it was assumed that other hosts on the Internet could be trusted. This is particularly visible in the configuration of relays, where the sender doesn’t have to be identified: a mail relay will accept mail from any server, regardless of where the mail appears to be coming from.

To attempt to rectify this, SPF was created. To set up SPF you add either a TXT or an SPF record to the DNS zone you will be sending from. This record defines which servers are allowed to send mail claiming to come from that domain. So for my domain danielhall.me I could publish an SPF record saying that only my mail server is allowed to send mail from addresses ending in @danielhall.me. Any mailserver receiving mail that claims to be from my domain but is not coming from an address listed in my SPF record can see that the mail is likely forged and throw it away. SPF works well in most situations but fails at a very common use case: if someone I send mail to forwards it to another address using an automatic process (no clicking forward in their client), the mail will still appear to come from my domain when it reaches the final recipient, but it will have come from the original recipient’s mailserver, which my SPF record does not list.
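As an illustration, a minimal SPF record published as a TXT record might look like the sketch below. The mechanisms say that the domain’s MX and A hosts may send mail, and that everything else should be rejected:

danielhall.me.    IN    TXT    "v=spf1 mx a -all"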

DKIM solves this problem by giving each sending mailserver a cryptographic key pair. The public key is published in a DNS record in the domain’s zone, and the private key is stored somewhere safe on the server. The server then signs the headers (especially the From: header) and the body of all outgoing emails, and attaches the signature to the email as an extra header. When the receiving server gets the email, it uses the signature, along with the list of signed headers, to verify the mail against the public key of the signing domain. This means that as long as the mail has passed through an authorised mailserver at any point, it will be considered valid.
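For illustration, the public key is conventionally published under a selector at <selector>._domainkey.<domain>, and the signature arrives in a header roughly like the one below. The selector ‘mail’ is hypothetical and the base64 values are truncated:

mail._domainkey.danielhall.me.    IN    TXT    "v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3..."

DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=danielhall.me;
        s=mail; h=from:to:subject:date; bh=...; b=...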

Setting up DKIM is a relatively simple process. You will need access to the zone records for your domain and access to the configuration of every mailserver that mail originating at your domain passes through. You should also be aware that signing makes it slightly more processor-intensive to send an email; if you send a large amount of email this difference could be quite significant. If you’re using sendmail you may be able to alleviate it by switching to a less resource-hungry MTA like Exim. Note too that in some configurations DKIM cannot be set up at all. For example, if you use masquerading in sendmail, DKIM will always fail, as sendmail modifies the From: header after signing.
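As a sketch of the first step, the opendkim tools (one implementation among several; wiring the signing milter into your MTA is a separate job) can generate the key pair and a ready-made DNS record. The selector name ‘mail’ is just an example:

<daniel@server ~>$ opendkim-genkey -d danielhall.me -s mail
<daniel@server ~>$ ls
mail.private  mail.txt

mail.private is the private key for the mailserver, and mail.txt contains the TXT record to paste into your zone.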

Ultimately DKIM is a good move for the Internet community at large, especially when combined with SPF. DKIM-signed mail is assured to come from the sender, and this can be cryptographically proven. While it does take a little more effort to set up and maintain, it assures that mail from your domain really did come from you or your company. DKIM can protect your company against phishing attempts and help your mail score better with spam filters.

Random Thought: What would email look like if it were designed today?

Cross-Domain AJAX

When making an XMLHttpRequest from a website, the browser will restrict you to the site the script came from. This is a security precaution: if sites were able to tell the browser to make requests to other domains, they would be able to DDOS a site with a user’s browser. There are legitimate reasons to make requests to other sites though.

Many sites offer web services, XML data and JSON-encoded data. These can provide almost anything from the weather, to search results, to advanced APIs. To use these services from your site using JavaScript you’ll have to employ one of the methods below.

Signing JavaScript

Firefox allows you to sign your JavaScript and place it in a jar file, which gives your code more privileges. You can also request these permissions explicitly without having your code signed, but having a dialog box appear for every AJAX request could get very tiring for the user. Another problem with this approach is that it isn’t documented very well and it’s Firefox-specific. The first link in the references section deals with this method.
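For reference, the explicit privilege request mentioned above looked roughly like this. It is a legacy, Firefox-only API that has since been removed, so treat this as a sketch:

// Legacy Firefox privilege request; triggers the permission dialog
// unless the script is signed. "UniversalBrowserRead" covered
// cross-domain reads such as XMLHttpRequest.
netscape.security.PrivilegeManager.enablePrivilege("UniversalBrowserRead");

var xhr = new XMLHttpRequest();
xhr.open("GET", "http://other.example.com/data.xml", true);
xhr.send(null);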

Access-Control Headers

This is the W3C-approved method of allowing a client from another domain to access your web service. It is a server-side method and requires no changes on the client to implement. This is both an advantage and a disadvantage: if you have control over the server then this method is simple, otherwise (for services such as the Yahoo APIs or other public services) you will not be able to implement it. It should also be noted that this was first implemented in Firefox 3.5, so it can’t be used with earlier versions, or other browsers.

To use this method you configure your service to send extra headers that tell the browser whether access is allowed or denied.
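For example, a response header along these lines grants a single origin access (the origin shown is hypothetical); the client side remains a perfectly ordinary XMLHttpRequest:

Access-Control-Allow-Origin: http://www.example.com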

Flash-Enabled XMLHttpRequest

This method involves using an invisible Flash player to perform the actual request, then handing the result back to the JavaScript for processing. Flash still performs permission checking, by looking for a /crossdomain.xml file in the root directory of the domain the request is being made to. There are several libraries that implement this approach, and a few even implement it in a way that is compatible with XMLHttpRequest. One downside is that Flash is required, though several major sites already require Flash and most browsers will have it installed.
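For illustration, a /crossdomain.xml on the target server granting access to a single domain might look like this (the domain name is hypothetical):

<?xml version="1.0"?>
<cross-domain-policy>
    <!-- Allow Flash movies served from *.example.com to make requests here -->
    <allow-access-from domain="*.example.com" />
</cross-domain-policy>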

Add Sites To Trusted Zone

Internet Explorer allows or denies cross-domain XMLHttpRequests based on its security zone settings. This approach is likely never going to be used on the Internet, as it requires user interaction and is Internet Explorer-specific. On a corporate Intranet it is slightly less difficult, but not by much.

Apache mod_proxy

With this method you use the same server that served the page to proxy requests automatically to the server with the data you’re fetching. For this to work your version of Apache has to be compiled with proxy support, or you need to have the mod_proxy DSO loaded. This method increases the latency of requests, as they must first go via your server. It should also be noted that this cannot be done in an .htaccess file and must be done in the main configuration.
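A minimal sketch of the relevant configuration, assuming mod_proxy and mod_proxy_http are loaded and that the remote service lives at api.example.com (a hypothetical host):

# Don't act as an open forward proxy
ProxyRequests Off

# Requests for /api/... on our server are relayed to the remote service
ProxyPass        /api/ http://api.example.com/
ProxyPassReverse /api/ http://api.example.com/

The browser only ever talks to your own domain, so the same-origin policy is satisfied.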

Manual Proxy

If you don’t have control over your server’s configuration, you can mimic the above method by writing a script that forwards the required variables to the remote service and forwards back the data. This approach can even be preferable to the above method, as it allows you to preprocess the variables and cache the data if required.
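From the JavaScript side, both proxy methods look identical; the page simply requests a path on its own domain. The proxy script name and parameters below are hypothetical:

// /proxy.php is a hypothetical server-side script that fetches the
// remote service named in 'service' and echoes the response back.
var xhr = new XMLHttpRequest();
xhr.open("GET", "/proxy.php?service=weather&location=brisbane", true);
xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
        // Same-origin as far as the browser is concerned;
        // handleWeather() is whatever processing the page defines.
        handleWeather(xhr.responseText);
    }
};
xhr.send(null);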

References

http://www.mozilla.org/projects/security/components/signed-scripts.html

http://dev.w3.org/2006/waf/access-control/

http://developer.yahoo.com/javascript/howto-proxy.html

https://developer.mozilla.org/En/HTTP_Access_Control

http://ejohn.org/blog/cross-site-xmlhttprequest/

http://ajaxpatterns.org/XMLHttpRequest_Call

http://ajaxpatterns.org/Flash-enabled_XHR

Random Thought: Can you use AJAX to make web applications cleaner?

Using EncFS to encrypt your files

About EncFS

EncFS is an encrypted filesystem based on FUSE. It transparently encrypts files stored in it and places them on another volume. This is in contrast to block-level encrypted filesystems, which transparently encrypt the data below the filesystem layer as it is being written to disk. Think of EncFS as a bind mount, except that the source for the mount is encrypted and the place it is mounted to is the only place it is available unencrypted.

The main advantage of EncFS filesystems is that when backing up, only the files that have changed need to be backed up. This means it works perfectly with tools such as rsnapshot. Another advantage is that the filesystem doesn’t need a block of disk allocated to it, and will shrink and expand as the files inside change.

Finally, because this is all implemented with FUSE, it is all done in userspace. No root access is required (apart from setting FUSE up) to create and alter EncFS filesystems.

Setting Up an EncFS Volume

The first thing you need to do to set up an EncFS volume is to install FUSE and EncFS. If you don’t have root access you will have to ask your sysadmin to do this for you; otherwise, follow your distribution’s method of installing new packages. On Fedora the package is called ‘fuse-encfs’ and on Debian/Ubuntu it’s called ‘encfs’. On some older systems, users wishing to use FUSE may need to be added to the correct group.
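As a quick sketch, the install commands (using the package names above) would be along these lines:

<daniel@server ~>$ su -c 'yum install fuse-encfs'    # Fedora
<daniel@server ~>$ sudo apt-get install encfs        # Debian/Ubuntu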

First you need to decide where you will put the EncFS volume, and where you’ll mount it. I usually put mine in /home/daniel/.crypt and mount it to /home/daniel/crypt, but feel free to name it whatever you want. When you’ve decided, run encfs with those two paths as arguments. With the paths above it would look like this:

<daniel@server ~>$ encfs /home/daniel/.crypt /home/daniel/crypt
The directory "/home/daniel/.crypt/" does not exist. Should it be created? (y,n) y
The directory "/home/daniel/crypt" does not exist. Should it be created? (y,n) y
Creating new encrypted volume.
Please choose from one of the following options:
 enter "x" for expert configuration mode,
 enter "p" for pre-configured paranoia mode,
 anything else, or an empty line will select standard mode.
?>

Standard configuration selected.

Configuration finished.  The filesystem to be created has
the following properties:
Filesystem cipher: "ssl/aes", version 2:2:1
Filename encoding: "nameio/block", version 3:0:1
Key Size: 192 bits
Block Size: 1024 bytes
Each file contains 8 byte header with unique IV data.
Filenames encoded using IV chaining mode.
File holes passed through to ciphertext.

Now you will need to enter a password for your filesystem.
You will need to remember this password, as there is absolutely
no recovery mechanism.  However, the password can be changed
later using encfsctl.

New Encfs Password:
Verify Encfs Password:

As you can see, the directories don’t need to be created first. There is also a prompt for which security settings you want to use: hitting enter will give you standard settings, but for something stronger you should hit ‘p’ then enter for paranoia mode. You can now proceed to place files in /home/daniel/crypt and they will be encrypted and placed into /home/daniel/.crypt. If you don’t believe me go ahead and check.

See? I told you so. Now you can unmount it using ‘fusermount -u /home/daniel/crypt’ and mount it again by running encfs /home/daniel/.crypt /home/daniel/crypt and typing your password.
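In transcript form, the unmount and remount from the paragraph above look like this:

<daniel@server ~>$ fusermount -u /home/daniel/crypt
<daniel@server ~>$ encfs /home/daniel/.crypt /home/daniel/crypt
EncFS Password: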

Random Thought: When travelling to other countries, local laws may mean that customs can search your laptop, including encrypted filesystems. You may have to reveal your key, or be arrested.

The Australian Mandatory Internet Filter

I’m ashamed that in today’s society I have to begin this post with this paragraph, but I have to nonetheless. For the record, I am absolutely opposed to child pornography, bestiality, sexual violence and rape. I find it abhorrent that people are involved in the production and distribution of such material, and I strongly feel that these people need to be brought to justice. I feel strongly that the government needs to implement measures to catch and prosecute these people and to make such material impossible to produce or distribute. I do, however, believe that the Mandatory Internet Filter as proposed by Stephen Conroy is the wrong way to go about this.

The Internet filter, quite simply put, is technically infeasible. The filter will work by directing all requests from Australian users towards a site containing RC content to a filtering device. This device then relays all requests to that site to the actual server, unless a request is made for a blocked page, in which case it instead returns a page indicating the site is blocked. This is similar to the way the firewall works in China and other countries with a national Internet filter. When done right, this method blocks every page on the blocked list, with no false positives. There is a problem however: this method does not scale well. If the government were to block a page on a large site (as was attempted with Wikipedia in the UK) then the filter would not be able to handle the load. Secondly, it appears to the administrators of that site that all requests are coming from a few IP addresses. This could cause Wikipedia to eventually block all Australians, either because the requests will look similar to a DDOS or because they have no way to distinguish between users and need to prevent abuse. And although the filter may be accurate at blocking web traffic, it is not capable of dealing with many other varieties of Internet data.

The proposed filter will only be capable of filtering standard web traffic from web browsers. The Internet consists of a large number of computers talking in any number of protocols; while web traffic is one of these, there are many other ways to exchange information. This filter will not be capable of filtering email, BitTorrent, eDonkey, Gnutella, XMPP, DCC, SSH, VPN or Tor traffic, and that is only naming a small portion. Many people caught in possession of child pornography and other illegal content are found to have downloaded it via peer-to-peer technology, because standard web traffic makes it easy to trace and identify the owner, whereas peer-to-peer traffic can be hidden much more easily. Secondly, web traffic can be ‘tunnelled’, or hidden inside these other protocols, and in this way completely bypass the filter. This means anyone with sufficient knowledge, or five minutes to learn, will be able to configure their PC to hide their data inside an SSH or VPN connection. These technical arguments come from my experience as a systems administrator, but there are other arguments that are not so technical.

Stephen Conroy has said that the filter will only deal with RC-rated content, however there is no transparency about what will be blocked. The government can’t publish a list of blocked sites, because that would effectively give people looking for this content a list of places to find it. But without knowing what sites are being blocked, we won’t know if or when the government decides to start blocking sites debating for or against abortion, euthanasia or any other politically sensitive topic. It may be interesting to know that the definition of RC content includes pages instructing in any crime, which would include euthanasia. A representative for Stephen Conroy has specifically stated the filter won’t be filtering pages related to euthanasia, but because of this broad definition that could change at any time, and we wouldn’t know until after the material was blocked.

I am a Unix Systems Administrator, and for the reasons listed above, and more covered better by other bloggers, I am opposed to the filter proposed by Senator Stephen Conroy and the Labor government. I urge my readers who are also opposed to the filter to write to your local MP, to Senator Conroy, and to Tony Smith (Shadow Minister for Broadband, Communications and the Digital Economy). If all else fails and the Government does not see sense, then use your vote. The filter will not work, and will waste taxpayer money that could be used in many better ways.

Random Thought: Will posting instructions about how to bypass the filter be illegal?