Putting together the Athcon 2012 CTF - Part II - Network

Hi everyone,

We finally got around to posting the long-promised Part II of "Putting together the Athcon 2012 CTF".

With the scenario out of the way, it was time to set up the network and, with it, shape up the gameplay a bit. At this point we only had a very vague idea of how the participants would ultimately be rewarded.

The network

The network had to be realistic, with all the little knobs and switches you would find on a real corporate network.

So we started with a simple segmentation of the systems into separate networks:

  • DMZ: Web, Mail, Database, VoIP PBX
  • LAB: Linux, OpenBSD and Solaris hosts with services ranging from Oracle databases to MySQL, PostgreSQL and even memcached servers.
  • SEC: Linux serving as central syslog and IDS (accessible only by the admins).
  • ETS: Not part of the "corporate" network, but it was there; this network hosted supplementary services for the entire CTF, such as participant registration.


The DMZ

The DMZ hosted several common corporate services such as web, mail and VoIP, along with a central database. Although a normal DMZ segment would be a bit more protected, in our case we decided not to enforce any filtering, so that we could take advantage of the "security" mechanisms of each individual service.

For instance, the database had carefully configured ACLs bound to user/IP pairs and limiting access to data, but other than that we kept the setup pretty basic in terms of security (we even disabled some security mechanisms). What's more, instead of exposing only 4-5 services for pen-testing, we now had more than 20 supplementary services.
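To give a flavour of this, the per-service accounts looked roughly like the sketch below; the addresses, schema and table names are made up for illustration and are not the actual CTF values.

    -- illustrative only: one account per service, bound to that service's IP
    CREATE USER 'mailsrv'@'10.13.37.25' IDENTIFIED BY '********';
    GRANT SELECT ON corpdb.users TO 'mailsrv'@'10.13.37.25';

    -- the web host got its own account, limited to the tables the portal needed
    CREATE USER 'websrv'@'10.13.37.80' IDENTIFIED BY '********';
    GRANT SELECT, INSERT, UPDATE ON corpdb.portal_users TO 'websrv'@'10.13.37.80';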

  • Mail server: The mail server was running Postfix and Courier IMAP on FreeBSD. It authenticated users centrally against the database and gave each administrator access to his personal email (a rough sketch of such a lookup follows this list). The mail server also had around 20 "easter eggs" (just to get the participants heated up).
  • Web server: The web server was running Drupal on Apache with PHP on CentOS Linux. The Drupal installation allowed for different levels of access to the available content, so guest users would see different pages from registered ones, and registered users would see different pages from users in the Admin group. RoundCube webmail and SugarCRM were also hosted on this system. The whole setup was an attempt to present the CTF participants with a corporate portal.
  • PBX: This job was handled by Trixbox. Just like on the mail server, every administrator had his own SIP account and voice mailbox.
  • Database server: Our database server was MySQL running on Fedora Linux.
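As mentioned above, the mail server authenticated against the central database; the classic way to wire that up is a Postfix MySQL lookup table. A minimal sketch (file paths, credentials and column names are illustrative) looks something like this:

    # main.cf (excerpt)
    virtual_mailbox_maps = mysql:/usr/local/etc/postfix/mysql-vmailbox.cf

    # /usr/local/etc/postfix/mysql-vmailbox.cf
    user     = mailsrv
    password = ********
    hosts    = 10.13.37.30
    dbname   = corpdb
    query    = SELECT maildir FROM users WHERE email = '%s'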


The LAB

Early on we had to deal with a difficult decision: you can justify only so many services running on a single network, so we needed a segment with fewer restrictions than the DMZ while keeping it relevant to the scope of the CTF and to the company this imaginary network belongs to. This gave us the LAB, a network segment that AcmeSec uses to audit different types of software and perform tests.

  • Solaris
  • Commerce Server: An OpenBSD box running the stock Apache with PHP and MySQL, which AcmeSec uses to test all sorts of e-commerce software such as Magento, osCommerce, ZenCart etc.
  • Oracle Server: A CentOS Linux running the Oracle 11g database
  • Honeypot Server: An OpenBSD box running honeyd and impersonating many different hosts (a rough configuration sketch follows).
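For anyone curious about the honeypot side, a honeyd configuration along these lines is enough to fake a handful of hosts; the personalities, addresses and script path below are illustrative rather than the ones we actually used.

    # fake Windows workstation
    create winbox
    set winbox personality "Microsoft Windows XP Professional SP1"
    set winbox default tcp action reset
    add winbox tcp port 135 open
    add winbox tcp port 139 open
    add winbox tcp port 445 open
    bind 10.13.38.50 winbox

    # fake router answering on telnet via one of the sample honeyd scripts
    create router
    set router personality "Cisco IOS 11.3 - 12.0(11)"
    add router tcp port 23 "/usr/local/share/examples/honeyd/router-telnet.pl"
    bind 10.13.38.1 router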


The SEC

The SEC network was the one the administrators worked on; it hosted a single Debian system with Snort, ACID, Snorby (which died at the last moment) and our very own Echofish. The system acted as a centralized syslog server, with Echofish as its web interface.
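Centralizing the logs required nothing fancy: every monitored box simply forwarded its syslog traffic to the SEC host using plain old syslogd forwarding (the address below is illustrative), and Echofish took it from there.

    # /etc/syslog.conf on each monitored host -- ship everything to SEC
    *.*    @10.13.39.10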

The system had multiple interfaces with no IP configured, attached to the other networks (LAB and DMZ), which Snort used to sniff out attacks and help the administrators track down the hackers. ACID served as the web interface the administrators used to investigate Snort events.
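The sniffing interfaces only need to be up, not addressed. On the Debian box the setup boils down to something like the following (interface names and paths are illustrative):

    # bring the monitoring interface up without an IP address
    ifconfig eth1 0.0.0.0 promisc up
    # run Snort against it in daemon mode, with rules/output as defined in snort.conf
    snort -i eth1 -c /etc/snort/snort.conf -D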


The ETS

This was the Echothrust Solutions network, which ran in parallel with the CTF. It hosted CTF-required infrastructure such as:

  • the registration system
  • the CTF administration interface
  • the real-time network visuals through Gource
  • the visualization of all Apache logs through Logstalgia
  • the scoreboards
  • the achievements boards

As one can imagine, we had to design the services on this network very carefully; thanks to OpenBSD's rdomains and a bit of MySQL trickery (hello, federated tables) we were able to provide decent isolation on this segment.
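For reference, putting an interface into its own routing domain on OpenBSD is a one-liner in the corresponding hostname.if file, and a FEDERATED table lets one MySQL instance read a table that physically lives on another. Both snippets below are simplified sketches with made-up names and addresses.

    # /etc/hostname.em1 -- ETS-facing interface in its own routing domain
    rdomain 1
    inet 10.13.40.1 255.255.255.0

    -- MySQL side: a local table that transparently reads from the remote database
    CREATE TABLE scores_remote (
        team   VARCHAR(64),
        points INT
    ) ENGINE=FEDERATED
      CONNECTION='mysql://score:********@10.13.40.20:3306/ctf/scores';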

Don't forget to check out the relevant material at github.com/echothrust/athcon-ctf.