stateful packet_filter #344
Conversation
This is a very pleasingly short implementation :-) Tests covering the interesting cases (including filling up tables) would be great. I will think about whether the Neutron front-end would have any more requirements on the API; I can't think of anything immediately.
@eugeneia Do you have an idea for how we should integrate this into the Neutron config? Goal is to be able to selectively enable stateful filtering e.g. on a per-port basis.
Force-pushed from 003b963 to d15e1e5
Added real tests and squashed commits. Should be ready for merge if the snabbot agrees.
@lukego Is adding a column to the "security rules" table an option?
@eugeneia That is probably hard compared with extending one of the JSON fields. (Risk is that we have to update the OpenStack database schema, get mixed into their ORM, and slog for years to sync with upstream.)
@lukego One way I see is to add a […] (I assume OpenStack's secrules table spec implicitly describes a stateless PF mechanism?)
@eugeneia what test coverage do you think we need for this in the NFV context? I'm thinking something like: […] and to get some performance visibility with iperf, both with the table quite full and with it quite empty.
@lukego I think semantics are best tested in the PacketFilter selftest, which has good coverage imho. Regarding performance, I feel like I don't really understand how the stateful PacketFilter works. When is a table "full"?
Sorry, my explanation is not a good match for the code, and maybe I am really suggesting requirements for the implementation that I should have written down earlier :-). My bad! Questions:
A reasonable answer might be: the table tracks up to 1,000 connections. Once the table is full, no new connections are allowed until an existing entry times out. Performance is similar for all table sizes: you always get >= 9 Gbps in iperf. A denial-of-service attack spamming random ports will temporarily block new connections but will not consume especially much CPU or memory.

A problematic answer would be: the table size has no fixed limit. Connections are added without bound. Performance of the Snabb Switch process degrades as the Lua garbage collector is invoked and swap memory is needed. A denial-of-service attack will cause the Linux "Out Of Memory Killer" to be invoked.

For current purposes it is more important to be stable and predictable than to be very fast or scalable. The main use case for stateful filtering at the moment is legacy management ports that may be carelessly exposing themselves to attack in some unpredictable way. (Like when chur was hacked via the Supermicro management port that was reachable via the internet and woefully insecure.)
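The stable behaviour described in the "reasonable answer" (bounded memory, deny new connections when full, entries expire on timeout) can be sketched as a small model. This is illustrative Python, not Snabb code; the class name, capacity, and timeout are assumptions for the sketch:

```python
import time


class ConnTable:
    """Fixed-capacity connection table (hypothetical sketch): refuses new
    entries when full and expires idle entries after a timeout, so memory
    use is bounded even under a connection-spamming attack."""

    def __init__(self, max_entries=1000, timeout=30.0):
        self.max_entries = max_entries
        self.timeout = timeout
        self.entries = {}  # connection key -> last-seen timestamp

    def track(self, key, now=None):
        """Record a connection. Returns False (deny) if the table is full."""
        now = time.monotonic() if now is None else now
        if key in self.entries:
            self.entries[key] = now          # refresh existing entry
            return True
        self._expire(now)                    # free slots held by idle entries
        if len(self.entries) >= self.max_entries:
            return False                     # table full: deny new connection
        self.entries[key] = now
        return True

    def check(self, key, now=None):
        """True if the key belongs to a live (non-expired) connection."""
        now = time.monotonic() if now is None else now
        ts = self.entries.get(key)
        if ts is None or now - ts > self.timeout:
            self.entries.pop(key, None)      # drop the stale entry, if any
            return False
        self.entries[key] = now              # refresh on activity
        return True

    def _expire(self, now):
        dead = [k for k, ts in self.entries.items() if now - ts > self.timeout]
        for k in dead:
            del self.entries[k]
```

Under overload this model simply denies new connections while existing ones keep working, matching the "stable and predictable" goal rather than degrading the whole process.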
On Tue, Feb 10, 2015 at 10:55 AM, Luke Gorrie notifications@github.com wrote:
- since the connection tracking tables are just Lua tables, the answer […]
- since Lua tables don't report the number of items stored, the easiest […]
- the "right" solution would be to replace the tables with something […]
- actually, a fixed-size hash table isn't so hard to do...

Javier
Short term it seems like the simplest solution would be to stick with Lua tables but have a counter to keep them small? And have a test case for simple DoS?
Agree that sooner or later we will want an FFI table that handles large datasets gracefully. I've used Judy arrays in the past but they are not very Snabbish (big and complex). I don't think we need this acutely yet though.
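The fixed-size table idea mentioned above can be sketched as an open-addressed hash table with a bounded probe length, so both memory and per-lookup cost are fixed up front. This is a Python stand-in for what an FFI/C implementation would look like; the size and probe limit are arbitrary choices for the sketch:

```python
class FixedHashTable:
    """Fixed-size open-addressed hash set with linear probing (sketch).
    All storage is allocated once; when a probe chain is full, inserts are
    refused instead of growing, so load cannot increase unbounded."""

    EMPTY = object()  # sentinel for an unused slot

    def __init__(self, size=1024, max_probe=8):
        self.size = size
        self.max_probe = max_probe           # bounded probing keeps lookups O(1)
        self.slots = [self.EMPTY] * size

    def insert(self, key):
        h = hash(key)
        for i in range(self.max_probe):
            slot = (h + i) % self.size
            if self.slots[slot] is self.EMPTY or self.slots[slot] == key:
                self.slots[slot] = key
                return True
        return False                         # probe chain full: refuse entry

    def contains(self, key):
        h = hash(key)
        for i in range(self.max_probe):
            slot = (h + i) % self.size
            if self.slots[slot] == key:
                return True
            if self.slots[slot] is self.EMPTY:
                return False                 # empty slot ends the probe chain
        return False
```

Refusing an insert here maps onto "deny the new connection": the failure mode is a dropped connection attempt, not growing memory or slower lookups.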
What ballpark are we talking about for the size of these tables? Thousands? Millions? I suppose that one usually doesn't track the state on incoming connections but rather on outgoing ones, and therefore an incoming DDoS will hit a (hopefully) rather small table, correct? If, on the other hand, one tracks incoming connections, I think this could easily blow up. DDoS attacks are easily in the millions of new states per second...
Current requirement seems modest to me: between 100 and 10,000 entries would be adequate for protecting a management port. Overload behavior (if spammed with legitimate connection requests) would be to deny new connections without slowing down the process (i.e. without affecting other ports or previously accepted connections). Future requirements will be tougher: if we were talking about NAT/firewall for all traffic on an ISP we might be talking about more like 1M sessions per gigabit of traffic. I'd prefer to cross that bridge when we come to it, though, rather than trying to anticipate future needs without a realistic test case.
(Tracking only outgoing connections might be reasonable in this case. I do think we need predictable overload behavior in that case too, though, e.g. in case somebody innocently runs nmap.) Generally for NFV I reckon that stateless filtering is more practical. The main value of stateful filtering now is to support deploying e.g. management ports that you have zero confidence in security-wise, e.g. that could have a telnet server running in the ephemeral port range or something nuts like that. Make sense?
On Tue, Feb 10, 2015 at 11:37 AM, Kristian Larsson […] wrote:

> just throwing numbers around, i guess tens of thousands shouldn't be a […]

right. unlike iptables, it's explicit which connections are tracked. For example: let pass all incoming SYN packets, track outgoing SYN+ACK; that way, a SYN flood wouldn't fill the connection tables, and if any […]

right. with high performance networks it's easy to overwhelm any […]

Javier
I would like to get a test case up and running that shows us where we stand, e.g. run background traffic with randomized addresses and see what happens. I would not be surprised if the lookup operation itself will be a bottleneck due to allocating interned strings for hashtable keys. (I'd wager a beer that LuaJIT will not sink those allocations.) Then we might need to switch to an FFI table immediately. But speculating about performance is a dangerous habit...
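The allocation concern above is about building a fresh string key per packet. One alternative is to pack the connection 5-tuple into a fixed-width binary key, which is roughly the shape an FFI-based key would take. A Python illustration (the field layout and byte order here are assumptions, not the PR's format):

```python
import struct

# Fixed 13-byte key for a connection 5-tuple instead of a formatted string:
# src IP (4 bytes), dst IP (4 bytes), src port, dst port (2 bytes each,
# network byte order), protocol (1 byte).
KEY = struct.Struct("!4s4sHHB")


def conn_key(src_ip, dst_ip, src_port, dst_port, proto):
    """Pack a 5-tuple into a fixed-size, hashable key."""
    return KEY.pack(src_ip, dst_ip, src_port, dst_port, proto)
```

In LuaJIT the analogous move would be an FFI struct used as a hash key, avoiding per-packet interned-string allocation; whether that is actually needed is exactly what the proposed test case would show.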
This is a much better API for stateful filters. In short: there are named "connection tables". If a filter rule includes a `state_track = "name"` entry, any packet that passes that specific rule is tracked in that table. A `state_check = "name"` entry adds to the rule the condition that the packet belongs to an existing connection. Either of these lines can also appear 'naked', outside of any rule, which adds state checking and/or tracking before any filter test.

The code seems clean and readable to me, but it still needs some real tests; the current `selftest()` just exercises the different options to show the generated code and track tables.
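These semantics can be modelled in a few lines. A toy Python sketch for illustration only (the real PR expresses rules in Lua config; the dict-based rule format and class name here are simplifications):

```python
class StatefulFilter:
    """Toy model of the state_track / state_check rule semantics:
    rules are dicts with an optional 'match' predicate over a connection
    key, plus optional 'state_track' / 'state_check' table names."""

    def __init__(self, rules):
        self.rules = rules
        self.tables = {}  # table name -> set of tracked connection keys

    def accept(self, key):
        for rule in self.rules:
            # state_check adds the condition that this packet belongs
            # to an existing connection in the named table.
            check = rule.get("state_check")
            if check is not None and key not in self.tables.get(check, set()):
                continue
            if not rule.get("match", lambda k: True)(key):
                continue
            # state_track: any packet that passes this rule is tracked
            # in the named table.
            track = rule.get("state_track")
            if track is not None:
                self.tables.setdefault(track, set()).add(key)
            return True
        return False  # no rule matched: drop the packet
```

A 'naked' `state_check` or `state_track` line corresponds, in this model, to a rule with no `match` predicate placed ahead of the others, applying the check/track before any filter test.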