Some tidbits while tinkering.

Month: June 2013

SSO and Mobile

Thankfully, Apple have finally brought SSO to the table for mobile apps.

I think it’s really important – in fact, I was going to do a post on how mobile apps are probably the best application of SSO: users keep the device with them, the current methods of authentication are painful (passwords on a mobile device take a long time to enter and are error-prone), and so many apps just save the password anyway to avoid the hassle.

There are some issues I see with SSO on mobile – it will need to be used carefully to avoid breaking two-factor authentication models – but it’s a huge win for business, and I hope it will be available via the SDK shortly.  I am really excited to see the new security features in iOS 7, and keen to try them out!

Expect some posts about the Enterprise licensing (also a massive improvement in enterprise management), and I’m keen to see the applications of per-app encryption and per-app VPN connectivity too.  These features have huge ramifications for BYOD, and how they can be accessed may be the difference between MDM remaining relevant, or becoming even more integral to enterprise mobile management.

More on iOS7 to come for sure!

WordPress, Facebook, Tumblr and the Democratisation of Content Generation

The advent of mobile, web apps and marketing-based economies has fundamentally altered how content gets onto the web.  The need for technical skills to deploy content and manage infrastructure is now almost non-existent, and with new technologies the focus for app developers (web and mobile) has been on providing a user experience that is easy and feature-rich.

This democratisation of the internet has created an environment where high-quality content can be provided by anyone, and it has been joined by a generation who have grown up with the internet and so don’t see privacy issues in the same way previous generations did.

Add to this the personal sensors and cameras that let our normal activities generate content, and – despite the rash of photos of people’s breakfasts – we are in a perfect storm: information of interest to share, with the technology to produce and consume it as we go.

Cloud and dedicated-application hosting (such as for WordPress) now make the specialised, high-quality hosting required to provide a reasonable, scalable web experience – once reserved for higher-end users like popular bloggers – accessible to the masses.


Big Data, while over-hyped, allows us to crunch huge amounts of data in ways that were previously impossible or infeasible.  Thanks go to Google for starting that revolution with BigTable and MapReduce, paving the way for tools like Hadoop and MongoDB.

With even more specialised hosting like AWS Elastic Beanstalk, Google App Engine and Heroku, startups can develop services that scale to significant levels without the massive capital investment previously required, promoting much faster innovation than we’ve ever seen before.

It’s a fantastic time to be involved in internet services – the speed of innovation keeps climbing, and technology is being used to better integrate our lives with the data we produce and consume.


100% Compliance, but only 65% of the Fleet

One of the biggest issues in compliance and security is that in most IT shops, the ability to scan and reliably detect all the computers/devices on a network is lacking, and overall coverage of tools like anti-virus, firewalls and security policy enforcement is poor.

The issue is highlighted in an example here by Daniel Wesemann.

From my experience, 65% is a pretty nice round number that would apply to most systems.

  1. At least 10% of most fleets would typically be non-conforming due to OS/hardware constraints or ‘Don’t Touch Me’ syndrome.
  2. Another 10% would be out of compliance due to poor management.
  3. Another 10% due to fleet turnover and poor on-boarding/decommissioning procedures, and
  4. 5% or so would be out of compliance because the admins chose to leave them that way, or the systems have been forgotten.
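To see what that breakdown does to a ‘100% compliant’ report, here’s a quick back-of-the-envelope sketch in Python. The fleet size is hypothetical, and the percentages are the rough ones above:

```python
fleet = 1000  # hypothetical total machines

# Out-of-compliance slices, per the breakdown above (as % of fleet)
missing = {
    "non-conforming (OS/hardware/'Don't Touch Me')": 10,
    "poor management": 10,
    "turnover / on-boarding gaps": 10,
    "deliberate exceptions or forgotten systems": 5,
}

# What the compliance tool can actually see and report on
visible = fleet - sum(fleet * pct // 100 for pct in missing.values())

print(f"Tool sees {visible} machines -> '100% compliant' really means "
      f"{100 * visible // fleet}% of the fleet")
```

That is, a clean report from the tool still only tells you about 650 of your 1000 machines.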

The above applies even more in a typical desktop fleet where 1 and 3 come into play more.

So how do we combat this?  Well, for a start, much like the SANS post suggests, don’t rely on the tool itself for compliance without comparing against the entire (known, normally <95% accurate) fleet numbers.  Then, if possible, use tools like network sniffers and IP scanning to identify resident hosts and known talkers.  While this may not be practical for many environments, it’s often the only way to find some servers, particularly in virtualised environments, unless management tools to track virtualisation are used and kept up to date (VMware is normally good at tracking this for the purposes of their licensing model). There are plenty of paid tools to help track down rogue network devices, but unfortunately I’m not aware of any good free ones.

Also, track down your local asset management people in finance, and make sure they are aware of all your assets, then compare their count to your known count.  The results almost never tally, and it’s best for finance to ‘true up’ their asset books.
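That reconciliation boils down to simple set differences between the different views of the fleet – the security tool, a network scan, and finance’s asset register. A sketch (all hostnames here are invented):

```python
# Three views of the same fleet. All hostnames are made up for illustration.
tool_managed   = {"web01", "web02", "db01", "app01"}
network_seen   = {"web01", "web02", "db01", "app01", "app02", "legacy-nt4"}
asset_register = {"web01", "web02", "db01", "app01", "app02", "printer-03"}

# Live on the wire but not under management -> compliance blind spots
unmanaged = network_seen - tool_managed

# On finance's books but never seen on the network -> candidates to 'true up'
ghost_assets = asset_register - network_seen

print("Unmanaged but live:", sorted(unmanaged))
print("On the books but unseen:", sorted(ghost_assets))
```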

PRISM and Cloud Hosting

It seems unusual that news articles such as this would indicate so many people would quit the cloud over PRISM. The crux of the matter is that most of the places people would host with could be easily accessible to any law enforcement agency that really wants access anyway. The real news is how quickly they can get it, and that the court-order frameworks are really just rubber-stamping.  This is not a surprise to many in the security community – in fact, I think it shows they are just doing the job properly.

It’s key to note that should a company or person choose to host any of their data in a hosted location, it will be available to local law enforcement.  This isn’t anything new – only the scale of the data provided by major US companies is.  The worrying thing isn’t the companies that publish this information; it’s the ones that don’t.

Also of concern to people is that foreign governments are cooperating with the US in data collection.  To that I say – isn’t that the point?  Don’t we want allies combining their data to allow them to pursue the investigations they need to? This is the real reason many hosters are not harping on about PRISM, and in particular why Australian hosters aren’t jumping on the ‘Host-Here-Avoid-PRISM’ bandwagon. They simply don’t know; or, more likely, many know that it does indeed occur, and they’d be silly to risk a statement like that backfiring.

Users of any off-site service should know:

  1. Your information can be intercepted at any point by law enforcement or others if it is not encrypted from endpoint to endpoint. This isn’t new, but be aware of the issues with intra-datacentre traffic too.
  2. Any device can typically be imaged by law enforcement if they need to in the course of an investigation.  This is certainly more invasive and annoying for law enforcement than PRISM-like data collection, but possible.
  3. Information can (and should) be shared between jurisdictions if needed.  Again, not news, and less surprising than the revelations around wholesale data sharing between intelligence groups – the fact they can justify this level of expense means they were doing plenty of it before anyway; this method is just cheaper, easier and faster.
  4. Personal data is available from many, many sources of probably equal scariness covering your shopping habits, activities, search history, visited pages and so on.  This information is given away by free apps to enable them to make money to provide services.  It ain’t cheap to be Facebook – so how do they make money? Off selling our data, of course!
  5. Industrial espionage – giving away data to foreign companies – can be, and in a lot of cases by some obvious candidates is, state-sponsored.  This sort of thing is absolutely something to worry about for large-scale corporates with useful IP, or large-scale deals in play. The ease of eavesdropping and bugging at all levels of the data/telco stack is huge, and shouldn’t be discounted.
  6. Encrypt. And keep those keys secure!
  7. Use anonymising services, but don’t think they protect you that much. Things like DuckDuckGo just make the tracking a little harder (it’s unlikely they use proper obfuscation techniques to prevent analysis of network traffic from identifying searches), and they won’t stop information from being transferred via other uses of free services. Get used to it, or opt out of modern social networks and services.
  8. Intelligence services now have reams and reams of easily accessible data from the internet, and don’t need court orders to get at it.  That just means they are able to act at the speed they hope they would, and big data is being used for something other than marketing.  Which is good.
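On the ‘Encrypt’ point: the idea is that data is encrypted before it leaves your endpoint, so the hoster only ever holds ciphertext. A toy sketch in Python – the SHA-256 counter-mode keystream here is for illustration only, and a vetted library (AES-GCM, NaCl/libsodium) should be used for anything real:

```python
# Toy illustration only: encrypt *before* upload, so the hosting provider
# (and anyone who asks them) only ever sees ciphertext. Not production crypto.
import hashlib

def keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """XOR data against a SHA-256 counter-mode keystream (demo only)."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        block = hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        stream.extend(block)
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

key, nonce = b"keep-this-key-off-the-cloud!", b"unique-nonce"
plaintext = b"meeting notes: nothing to see here"

ciphertext = keystream_xor(key, nonce, plaintext)  # what the host stores
recovered = keystream_xor(key, nonce, ciphertext)  # XOR twice = original
assert recovered == plaintext
```

The point is structural: as long as the key never leaves your endpoints, a court order served on the hoster yields only ciphertext.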

iOS Wireless Hotspot Cracking

As has been reported, the wireless hotspot feature of iOS has been cracked.

We keep getting examples of how, while protocols may be theoretically broken, it’s almost always more effective to go after the endpoint – either by breaking the password, or by breaking the password sharing itself – allowing an attacker to bypass having to break the protocol at all.

It highlights why, even though I’m sure the password generation was done with good intentions (something you could feasibly type on mobile devices without having to try multiple times, probably based on a risk assessment of short-lifetime sharing), we need to be careful about exactly how we make things easy for the user. If this had used a larger keyspace, and perhaps changed the password each time you shared, the attack would have been far less feasible. (I noticed the static password when using the feature myself, but changed it manually, and don’t leave sharing on for long enough to make the attack practical.)
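To put ‘larger keyspace’ into numbers – the figures below are illustrative assumptions (a ~2,000-word list plus four digits, and a notional attacker speed), not the actual scheme:

```python
GUESSES_PER_SEC = 1_000_000  # assumed attacker speed -- purely illustrative

def worst_case_days(keyspace: int) -> float:
    """Days to exhaust the whole keyspace at the assumed guess rate."""
    return keyspace / GUESSES_PER_SEC / 86_400

# One word from a ~2,000-word list plus four digits
dictionary_scheme = 2_000 * 10**4

# Ten random characters from a 62-symbol alphabet (a-z, A-Z, 0-9)
random_scheme = 62**10

print(f"dictionary+digits: {dictionary_scheme:,} combos "
      f"({worst_case_days(dictionary_scheme) * 86_400:.0f} seconds worst case)")
print(f"10 random chars:   {random_scheme:,} combos "
      f"({worst_case_days(random_scheme):,.0f} days worst case)")
```

Under these assumptions the dictionary scheme falls in seconds, while the random password is out of reach by ten orders of magnitude – which is the whole argument for a bigger keyspace.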

Security usability has come a long way, but still has a huge way to go before people can actually use the upper-end security features available in modern OSs without feeling trapped, or simply not understanding the risks. We also need to be careful about unintended consequences, and make sure risk analyses still hold up once users actually get involved.

Extra Bluetooth Functionality in iOS7

Another really cool feature of iOS 7 is the additional Bluetooth capability.  Some major news sites have covered this, such as 9to5Mac, and the essence is the most complete Bluetooth LE coverage in a mobile device so far.  So comprehensive, in fact, that I wonder how long it will take Android to catch up – particularly as we are still waiting for the official Google Bluetooth stack.

The ability to have push notifications flow through to Bluetooth devices, and the mechanisms to allow ‘always on’, will really improve the accessory market for iOS, and it’s certainly an area booming right now. I’m really looking forward to the advances we can get from Bluetooth LE! The iPhone, and to a lesser extent Android, will really have the capability to be the device ‘keyed in’ to a fully connected environment. I just wish the iPhone had more sensors onboard, but we’ll see what the next iPhone brings.

Best Feature of iOS 7

For me, hands down the best feature of iOS 7 is the new ‘fetch’ API enabled as part of the notification service.  Previously, notifications were a one-way message, and I’m sure I’m not alone in wondering why, if an app has registered and been allowed to send notifications, there wasn’t a way to send a payload with the notification to avoid the user having to manually open the app to trigger an update.
The fetch mechanism allows an app to have a special wake state to download information based on a wake-up event triggered from a notification.  So hopefully, no more need to manually update Words with Friends just because a single person has updated their move!

I’m keen to test it out soon, and see what protection is in place to stop apps from abusing this service to download large portions of data.  I would have preferred a small payload to be allowed in an individual notification, as this would be less likely to cause users to have unintended downloads, but this ‘fetch’ method is more flexible for those hosting their own server infrastructure, so it’s a great thing overall.
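For comparison, a sketch of what the two push payloads might look like (the `aps` keys are Apple’s; the app-specific key is a made-up example):

```python
import json

# Ordinary visible notification: a one-way message; the user still has to
# open the app themselves to pull down the new data.
visible = {"aps": {"alert": "Your move!", "badge": 1}}

# iOS 7 silent push: 'content-available' wakes the app in the background so
# it can fetch the new data itself -- no user interaction required.
silent = {
    "aps": {"content-available": 1},
    "game_id": "abc123",  # hypothetical app-specific hint for the fetch
}

print(json.dumps(silent))
```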

Up for 2 weeks and 629 Brute Force Hacking Login attempts

It seems the people scanning and attempting brute-force logins against WordPress installations are still very active – and still using all the common admin usernames.  I feel sorry for people with outdated or default admin usernames; it’s just a matter of time, given the huge number of IP addresses being used.
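Counting these attempts from a web-server access log is straightforward. A sketch, with fabricated log lines, assuming the default /wp-login.php login URL:

```python
import re
from collections import Counter

# Fabricated access-log lines in common log format
log = """\
203.0.113.9 - - [14/Jun/2013:03:11:02 +1000] "POST /wp-login.php HTTP/1.1" 200 3985
203.0.113.9 - - [14/Jun/2013:03:11:04 +1000] "POST /wp-login.php HTTP/1.1" 200 3985
198.51.100.7 - - [14/Jun/2013:03:12:19 +1000] "POST /wp-login.php HTTP/1.1" 200 3985
192.0.2.44 - - [14/Jun/2013:03:13:55 +1000] "GET /index.php HTTP/1.1" 200 11204
"""

# Tally login POSTs by source IP (the first field of each log line)
attempts = Counter(
    line.split()[0]
    for line in log.splitlines()
    if re.search(r'"POST /wp-login\.php', line)
)

print(f"{sum(attempts.values())} login attempts from {len(attempts)} IPs")
print(attempts.most_common())
```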

Wish SEO worked that well!

DARPA Plan X and the commoditisation of cyber attacks

I don’t think I’m alone in wondering whether DARPA’s Plan X is a good idea.

The concept of moving to a realm where cyber-attacks are automated, accessible by API, and easy to execute is, to be honest, already a reality in the virus world, and there is a valid case that if the technology is already out there, why not use it ourselves?

That is certainly true, but the real question for me – aside from the issues arising from specifically developing this capability and expecting it not to be actively targeted by opposing forces – is that if the concepts behind negotiating and launching attacks become commonplace, the defences against them will become commonplace too.

We already have a number of tools available in the security community to profile and identify malicious traffic, and to perform ‘virtual patching’ on vulnerable pieces of software.  An automated tool for executing critical attacks against adversaries is setting itself up to be blocked in a similar manner.  It is also unlikely to be able to utilise the fabled ‘zero-day’ attacks available to military forces, and neither will it be able to easily adapt in trying circumstances.  The only real benefit I see is that if the tool is automated, it may be able to automatically perform obfuscation or mutation of the attack in real time, much as sophisticated viruses do now.
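That ‘virtual patching’ approach boils down to matching known-bad patterns in traffic before it reaches the vulnerable software. A minimal sketch – the signatures and requests below are invented, and real IDS/WAF rule sets (Snort, ModSecurity) are far richer, with decoders and anomaly scoring:

```python
import re

# Invented signatures for illustration only
SIGNATURES = [
    re.compile(r"union\s+select", re.IGNORECASE),  # naive SQL injection probe
    re.compile(r"\.\./\.\./"),                     # directory traversal
]

def is_malicious(request: str) -> bool:
    """Flag a request if any known-bad pattern matches it."""
    return any(sig.search(request) for sig in SIGNATURES)

requests = [
    "GET /index.php?id=1",
    "GET /index.php?id=1 UNION SELECT password FROM users",
    "GET /../../etc/passwd",
]

for r in requests:
    print(("BLOCK " if is_malicious(r) else "ALLOW ") + r)
```

The same mechanism that lets defenders shield an unpatched server would let them fingerprint and block a commoditised attack tool – which is exactly the concern raised above.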

In the end, making the attacks easy to automate may provide a benefit for physical forces who manage to breach the physical security of networks, as remote units could plant what is, in essence, a ‘cyber bomb’ from within an enemy’s network.

It’s the future, and for the security industry it’s the sign of a field finally beginning to mature and see itself as less special. That’s a good thing really, but it’s a pity we still can’t adequately deal with the core issues around defending networks in an automated fashion – trust models, information sharing, and network-wide defensive mechanisms – in ways the research world already knew how to do 10 years ago.

Why APIs are the new game in town

Enterprise IT has, for a long time, struggled with how to share data between individual business units in a manner that is safe for the data, yet flexible enough to allow sharing without breaking business models.

I’ve always liked simple things, so watched in bemusement at the recent waves of data warehousing and ESB implementations go by. As a wise Professor of mine once told me ‘Almost any problem in IT can be solved by a layer of abstraction’ – though some abstractions are better than others :)

By my reckoning, there are serious issues in both approaches for business.
For data warehousing:
1. How to get data there, and keep it current?
Businesses seem, by and large, to just implement siphons of data from existing business systems, but leave out actually providing much value beyond perhaps reporting to the CEO how many widgets were sold yesterday. For highly streamlined logistics or supply-chain businesses this can be enough, but it’s hardly putting the data to good use.
2. BI on data warehousing is traditionally very expensive, and expensive to maintain.
Most businesses don’t have good data analysis skills to ask the right questions, or understand the answer from BI tools.
3. Security is hard on big data sets, and they are ripe for abuse.
4. Changes anywhere in the data models have the tendency to break the transforms people have built up, and this can be hard to detect and correct.

As for ESBs, they have their own challenges:
1. Many, but not all are hard to configure, hard to maintain, and a single point of failure for the business.
2. Typically security models for the ESBs don’t exist, or aren’t implemented.
3. ESBs don’t necessarily solve any business problems, as they still need to be integrated at each touch point.
4. The business needs to decide how much of their business rules will be implemented in the ESB. This, while seeming simple, is hard.

So what is the solution?  Like a lot of things, it’s all about web startups doing their own thing showing traditional IT how it’s going to be done.  They use APIs.

The big move on the web now is around publishing your APIs, and encouraging developers to integrate with them.  This is made a lot easier nowadays by the move towards proper web application separation, and REST interfaces becoming the norm.  If you need speed and don’t need complex data sets, you can use JSON with many APIs too. The use of REST and JSON as de-facto standards takes away the need to discuss or negotiate the method by which you convert and transmit data between business or functional units. Web apps nowadays live and die by their integration with mobile and other web apps, and by how easily you can get data in and out of them. Extensibility via plugins is also huge for many applications, and APIs are used for this too.
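To show how small such an API can be, here’s a minimal read-only REST endpoint serving JSON with nothing but the Python standard library (the ‘widget’ data and route are invented for illustration):

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical records owned by one business unit
WIDGETS = {"1": {"id": "1", "name": "sprocket", "stock": 42}}

class WidgetAPI(BaseHTTPRequestHandler):
    """Read-only JSON-over-REST view of the widget data."""

    def do_GET(self):
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "widgets" and parts[1] in WIDGETS:
            status, body = 200, json.dumps(WIDGETS[parts[1]]).encode()
        else:
            status, body = 404, json.dumps({"error": "not found"}).encode()
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence request logging for the demo
        pass

# Serve on an ephemeral port and make one consumer request against it
server = HTTPServer(("127.0.0.1", 0), WidgetAPI)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_address[1]}/widgets/1"
record = json.loads(urllib.request.urlopen(url).read())
print(record)
server.shutdown()
```

In practice the handler is where the data owner’s authentication, rate limiting and business rules live – which is exactly the structural separation being argued for here.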

As businesses, the lesson we can take away from this is that structural separation via APIs is a workable model, and can provide benefits in allowing information interchange while still abstracting away the direct interface with data.

Of course there are challenges.  Building APIs is difficult and normally needs in-house development, but the benefit for new players right now is that with so many public APIs out there, there are plenty of models to work from – and legions of developers ready to complain in places like Stack Overflow when an API turns out to be terrible.

Also, APIs force individual data owners to enforce business rules through the specification of the API, and they will need to implement their own security.  But in the end, this is what happens in practice in many organisations anyway, whether they like it or not.  Using federated authentication is also a challenge, but again, the web has a range of OAuth and open-source implementations ripe for the picking – and I would suggest these are bound to run into a whole lot less trouble than many of the big software vendors’ systems.  But more on that in another post.

While APIs are probably the best solution for many businesses, there are a couple of cases where APIs probably aren’t best.
1. You need wholesale, fast access to data.  Data warehouses are built for exactly this, and excel at it. ‘Big Data’-type problems are best solved away from the overheads of APIs.
2. You need a range of automation and business rules implemented between the systems themselves. This is prime territory for an ESB – but evaluate APIs first, as it’s too easy to move to ESBs based on problems that can be easily solved with APIs, even if you are dealing with tens of systems.
