Earlier tonight I read an article on Engadget about how an employee at Verizon had accessed customer data and sold it to a private investigator. It was information he was not authorized to access. This is not an uncommon thing. What is fairly shocking is that he had started in 2009 and was not caught until 2014. This calls into question Verizon's auditing practices on that data, and also how they set up access control. Performing audits and looking for unauthorized access is not sexy or fun work, and it is labor intensive and costly. There are tools that can help with some of that; an example would be NetIQ Sentinel. Programs like Sentinel can be set up to monitor logs looking for abnormal behavior or data, then alert the appropriate people to take a closer look. But to set this up you first need to get a baseline of normal activity. You also need to try to determine what a breach might look like in any of a number of ways, and put in the metrics for the application to look for.
Along with using a program to monitor activity, it is important for humans to look through logs too. The human brain is very good at spotting patterns and noticing abnormalities. It is like solving a puzzle. I remember years ago reading a book by Cliff Stoll called "The Cuckoo's Egg". It was a fascinating read and my first real exposure to computer security. The gist of the story is that Cliff was an astronomer whose grant funding had run out, so he took a job in the computer data center. This was back in 1986, during the days when mainframes ruled and had a whole slew of minions to keep things running. Students and college departments were charged for computer time. Cliff found a very small discrepancy in the accounting, something less than a dollar if I remember right. Cliff Stoll is somewhat eccentric along with being extremely smart. For a short period of time I actually had the pleasure of chatting with him online, and he is amazing and funny. At any rate, he knew something was wrong and would not drop it even when others said it was not worth the effort. I don't want to give away too much in case you want to read what is actually a fun book. But I will say that because he would not let the anomaly go, he was able to ultimately track down a pretty serious security breach. Get the book... it is well worth it.
In my work with LDAP, one of the things I would often do is run reports on the system looking for accounts where no one had logged in for more than three months. It was not something I was mandated to do, but I knew from Cliff's book that this was an important thing to look at. I would then work with others to determine why we had stale accounts and deal with them. I would be willing to bet that in at least 50% of companies I could find accounts that were unused for over a year. It might even be higher. This is just one example of auditing that should happen regularly. Another would be to get a baseline of how many LDAP searches are performed through the day, every day of the week. The thing is that all systems will have a normal pattern of those searches. It will go up and down through the day, but there will be a predictable level from day to day and week to week. Once you have set a baseline, you can then monitor for abnormal changes in those levels. If the search pattern changes more than, say, 10 to 20% from normal levels, then you should alert the security team to look at it right then.
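The baseline comparison itself is simple logic. Here is a minimal sketch of that check; the baseline numbers, hours, and threshold are all invented for illustration, and in practice the baseline would come from your trending data:

```python
# Hypothetical sketch: compare observed hourly LDAP search counts against a
# stored baseline and flag any hour that deviates more than a set threshold.
# BASELINE values and the 20% threshold are assumptions for illustration.

BASELINE = {9: 1200, 10: 1500, 11: 1450}  # hour -> typical search count
THRESHOLD = 0.20  # alert when counts drift more than 20% from baseline

def check_search_levels(observed, baseline=BASELINE, threshold=THRESHOLD):
    """Return a list of (hour, observed, expected) tuples that need review."""
    alerts = []
    for hour, expected in baseline.items():
        seen = observed.get(hour, 0)
        if expected and abs(seen - expected) / expected > threshold:
            alerts.append((hour, seen, expected))
    return alerts

# A spike at 10:00 (2600 searches against a baseline of 1500) gets flagged,
# while the small wobbles at 9:00 and 11:00 stay quiet.
print(check_search_levels({9: 1210, 10: 2600, 11: 1400}))
```

Anything this function returns would go to the security team as an alert rather than just a log line.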
The same thing is true of website authorizations. Say you run Siteminder to protect the system. Every authorization request is monitored. You will have X number of proper authorizations an hour. You will also have Y number of failed authorizations per hour because people typed a password wrong or forgot their password. Once you sample those numbers for a baseline, then again you can set up a system to monitor the levels and alert when there is an abnormal change. You might need to set up a script that will query the log files, make counts, and then stuff the counts into a database for trending. So there will be a fair amount of up-front work, but once it is done you will be able to find issues. With something like the Siteminder logs, you could even parse the logs into a database that you could then search if you think you have a user doing things they are not supposed to, and be able to find where they logged in.
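A counting script like that can be quite small. The sketch below uses an invented log format (real Siteminder logs look different, so the regex would need adjusting) and SQLite just to keep the example self-contained:

```python
import re
import sqlite3

# Hypothetical sketch: tally successful and failed authentications per hour
# from an access-management log and store the counts for trending. The log
# line format here is invented; adapt the regex to your product's layout.
LINE_RE = re.compile(r'^(\d{4}-\d{2}-\d{2} \d{2}):\d{2}:\d{2}\s+AUTH\s+(OK|FAIL)')

def load_counts(lines, db_path=":memory:"):
    db = sqlite3.connect(db_path)
    db.execute("CREATE TABLE IF NOT EXISTS auth_trend (hour TEXT, ok INT, fail INT)")
    counts = {}
    for line in lines:
        m = LINE_RE.match(line)
        if m:
            hour, status = m.groups()
            ok, fail = counts.get(hour, (0, 0))
            counts[hour] = (ok + 1, fail) if status == "OK" else (ok, fail + 1)
    for hour, (ok, fail) in counts.items():
        db.execute("INSERT INTO auth_trend VALUES (?, ?, ?)", (hour, ok, fail))
    db.commit()
    return db

sample = [
    "2014-06-06 09:12:01 AUTH OK user=alice",
    "2014-06-06 09:14:22 AUTH FAIL user=bob",
    "2014-06-06 09:15:03 AUTH OK user=carol",
]
db = load_counts(sample)
print(db.execute("SELECT hour, ok, fail FROM auth_trend").fetchall())
```

Once the counts are in a database, the trending and alerting pieces just query that table instead of re-reading raw logs.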
Ready access to trend information
After you get things set up, you should test them periodically. There are a number of tools you could find that will help you stress the system to generate alerts. I live in Michigan, and in the summer, at noon on the first Friday of every month, they blow the tornado sirens so they know the system is working. You should do the same with the alerting system. Set it up so that there is a sudden dump of a large number of LDAP searches, and make sure the charts trend it and the alerts go out properly. You want to do it periodically because you never know when a system change might break a script or something, and suddenly the protection system is not working right.
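The "tornado siren" idea can even be scripted as a drill. This is a deliberately simplified sketch: it injects a synthetic spike of search counts and asserts that the deviation check fires. All the numbers and names are invented, and a real drill would fire actual searches at the directory instead:

```python
# Hypothetical fire-drill sketch: inject a synthetic spike and confirm the
# alerting check actually fires. Baseline and spike factor are illustrative.
BASELINE = {12: 1000}  # hour -> typical search count

def deviates(observed, expected, threshold=0.20):
    """True when the observed count drifts more than threshold from expected."""
    return abs(observed - expected) / expected > threshold

def run_drill(baseline=BASELINE, spike_factor=5):
    """Simulate a sudden flood of searches; return whether an alert fires."""
    hour, expected = next(iter(baseline.items()))
    injected = expected * spike_factor  # the "tornado siren" burst
    fired = deviates(injected, expected)
    print(f"hour {hour}: injected {injected} vs baseline {expected} -> alert={fired}")
    return fired

assert run_drill(), "alerting pipeline failed the drill -- investigate!"
```

Run it on a schedule; the day the assertion fails is the day you find out a system change quietly broke your monitoring.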
Tighten down special access
All systems will have certain accounts that have special access. System administrators are a perfect example, but there are others too, like help desk personnel. They will have certain systems where they need higher access levels to do their work. Recently I went through training on a program called CyberArk. The goal of the CyberArk system is to tighten up control of those special access accounts. First, it has controls over the passwords on system accounts. It can do single-use passwords, or force periodic password changes. It helps get rid of the abysmal practice of keeping passwords in a password-protected (a total joke) Excel spreadsheet. The program can also be set up so that a person has to go through a CyberArk proxy to get access to a system; CyberArk then records absolutely everything the person does while on the system and can play back the session. This does not prevent someone from doing something wrong, but it gives you the data to discipline or prosecute them. And when users know they are being monitored that closely, they will be much less apt to try to do something they are not supposed to do.
It is also important to limit high-level or system access to your systems. There should only be a limited number of individuals with that access, and they should each have their own account. You should never use a shared account for all administrators. And there should be regular audits that list out all users with special privileges in the system. This needs to be part of any audit, like a SOX audit. Ideally the audits should be performed by people who can view the rights but cannot grant rights. This is what is called separation of duties, similar to the separation of duties concept in financial audits. Or at the very least, there should be multiple people who verify the information, like when a business requires two different people to sign checks.
Audits and security might not be sexy. They might cost money and not be a profit center. However, if you don't pay attention to this all along, then something will happen sooner or later, and it will cost you even more after the fact.
Security is often an afterthought in designing and maintaining computer systems. In the process, a company can have gaping security holes it is not aware of until it is too late. Also, over time it will end up costing far more to accomplish what they want, and you end up with a very convoluted system that is difficult to manage and maintain.
One of the most overlooked areas is managing identities on the system. This involves creating new accounts, easily assigning rights, and in the end removing accounts when needed. Most companies have more than one system with user accounts on it. Many companies will send emails around to the admins of the various systems to create accounts and grant rights. It can often take days to provision a new user into the system. During this time the user is unproductive, and it takes so much time from so many people that it costs the company real money in man-hours. However, with a good identity management system, user provisioning can be easily streamlined. There are a number of products out there like Oracle IDM, IBM Tivoli, Microsoft FIM, SailPoint, and NetIQ IDM. Interesting side note: NetIQ IDM is the oldest and most established product of the bunch. It started as Novell DirXML way back in 2000 (an eternity in computer years).
So what does an IDM product do for you? At a high level, it lets you automate user management over multiple systems. You create the user in a single place, often called the identity store, and the IDM system automatically sends information through drivers or connectors to all the other systems to create the user everywhere they need to be. Typically this happens in near real time, so instead of a new user creation taking days, it can take minutes. I worked on one system where the main identity store was connected to three different LDAP directories, four different eDirectory systems, two different Active Directory domains, Lotus Notes, and an IBM mainframe. When a new user was created, the account was processed through the entire system to all the different endpoints in less than 5 minutes. I never really tested for a more granular result, but it was very fast. And the user has all the accounts they need; no one finds out later that the account on system X was missed.
User changes are handled the same way. As users work for an organization, they will often be granted new tasks and responsibilities, or change positions. This means their access will change. They might need access to a new system, greater access than they had before, or access removed from a system because they changed departments. So the change is noted in the IDM system, and it goes through processes that adjust the user accordingly in all the appropriate systems. It might only affect a single website, or it could mean changes in a half dozen different systems. The IDM system can handle all of that.
Deprovisioning a user, deleting them from the system, is just as easy as creating them. The appropriate person executes the workflow to remove them, and in moments the account is dealt with on all the interconnected systems. Now you don't have the issue of one account sitting live for weeks because the admin for that system was on vacation. The account has been properly dealt with system-wide. With systems that use a manual procedure, it is not uncommon to find accounts that have not been used in months, and sometimes even years. But even more dangerous is a stale account that a hacker or disgruntled ex-employee finds is still active, because now they can use that account indiscriminately for an extended amount of time with no one noticing. If a hacker gets hold of an active account, the main user will typically notice fairly quickly. But if it is an account left over from a former user, then typically no one will notice its use, at least for quite some time.
The IDM system uses something called drivers or connectors to interconnect systems. These drivers are configurable as to what information they share. You might have one system where you only need the user ID, first and last names, and the user's manager's name. Another system might need a dozen or more attributes on the user. You might actually need to modify the format of some of the information on its way to the other system; drivers can be adjusted accordingly. Most IDM systems can also be set up to enforce rules, like requiring that an identity only be created in a single place. If a driver sees an account created on an endpoint system, it can be set up to reject or roll back the change on that endpoint and notify the identity security team of the unauthorized attempt to create the account. It is important to consider where you want the authoritative identity source to be, and how you want to handle possible breaches. You also need to document what information needs to be where so the drivers can be properly configured.
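Conceptually, a driver is just a filter plus some transforms. The sketch below is not any vendor's driver format; the attribute names and the display-name transform are invented to show the idea of projecting an identity down to only what one endpoint should receive:

```python
# Hypothetical connector sketch: each driver declares which attributes it
# carries and how to reshape them for its target system. Field names and
# the transform are invented for illustration.

def to_display_name(identity):
    return f"{identity['last_name']}, {identity['first_name']}"

HR_DRIVER = {
    "attributes": ["user_id", "first_name", "last_name", "manager"],
    "transforms": {"display_name": to_display_name},
}

def project(identity, driver):
    """Build the payload a driver would push to its endpoint system."""
    payload = {k: identity[k] for k in driver["attributes"] if k in identity}
    for field, fn in driver["transforms"].items():
        payload[field] = fn(identity)
    return payload

identity = {"user_id": "jdoe", "first_name": "Jane", "last_name": "Doe",
            "manager": "asmith", "ssn": "000-00-0000"}
print(project(identity, HR_DRIVER))  # note: ssn never leaves the identity store
```

The important property is the one in the last comment: attributes the endpoint was never configured to receive simply do not cross the wire.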
IDM and workflow
So far we have been talking at a high level about how IDM works. You initiate a new user creation, a change to a user, or a user removal, and the system takes care of the task. But sometimes there are complexities to user creation. In larger organizations there are often approvals that need to happen before certain accounts can be created or granted particular rights. This is where workflows become important. A top-notch IDM solution will have at its core the ability to create robust workflows. But what is a workflow? Think of a workflow as a series of rules or actions that need to be accomplished for the action to complete. The workflow is initiated, and it evaluates events, like approvals, and then takes appropriate action.
So let's say you have a user who has been moved up to an entry-level management position. In their new role they need to be able to access the HR records for all of the people who now report to them. They also need to be able to approve vacation requests and time sheets in the HR system. So the workflow is initiated to allow that access. The workflow might be kicked off by an assistant in HR or someone on the help desk. This person does not have the authority to authorize the new rights, so the workflow knows it needs approval from another person. It sees the need for the approval and sends that person a notification that an approval is needed. Once the person approves the request, the workflow proceeds with granting the access to the individual and sends them notification that they now have the appropriate access. However, what happens if the approver does not respond right away? The workflow might have an escalation rule that says if the approval is not granted in, say, 4 hours, then a notice is sent to the approver's manager to let them know that the approval is awaiting a response and has passed the initial SLA. Depending on the workflow, the manager might then give approval, or it might allow for the manager to contact the approver to find out what the holdup is.
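Stripped of vendor specifics, that approval-plus-escalation flow is a small state machine. This sketch is a toy, not any IDM product's API; the names, the 4-hour SLA, and the notification list are all invented for illustration:

```python
from datetime import datetime, timedelta

# Hypothetical workflow sketch: a request waits for approval and escalates
# to the approver's manager once the SLA window passes.
SLA = timedelta(hours=4)

class AccessRequest:
    def __init__(self, user, right, approver, manager, submitted):
        self.user, self.right = user, right
        self.approver, self.manager = approver, manager
        self.submitted = submitted
        self.state = "awaiting_approval"
        self.notify = [approver]  # who should be looking at this right now

    def tick(self, now):
        """Re-evaluate the request; escalate if the SLA has lapsed."""
        if self.state == "awaiting_approval" and now - self.submitted > SLA:
            self.state = "escalated"
            self.notify.append(self.manager)

    def approve(self):
        self.state = "granted"

req = AccessRequest("jdoe", "hr_manager_view", "approver1", "boss1",
                    submitted=datetime(2014, 6, 6, 9, 0))
req.tick(datetime(2014, 6, 6, 14, 0))  # five hours later, past the 4-hour SLA
print(req.state, req.notify)
```

A real workflow engine adds persistence, notifications, and audit logging around exactly this kind of state transition.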
The thing with workflows is that they can be as simple or as complex as the organization needs. One suggestion: keep the workflow as simple as you can and as complex as you need it to be. Sometimes people make overly complex workflows, and they become more of a burden than a help. The more complex anything is, the bigger the chance of failure. This is as true in workflow design as it is in any other systems design. Strive for simplicity, using the complex abilities only when truly needed.
When you create a workflow, the first thing you should do is determine the flow on paper. What is the end point you need to achieve? What approvals do you need in the workflow? What is the SLA (service level agreement, i.e. the timeframe) for the tasks at hand? Who does it need to escalate to? Once you have the flow down on paper, it is a simple job to build the rules into the workflow in the IDM system. Using a flowcharting program like, say, Visio is a huge help for this, and that flowchart can be added to any documentation you create. It is also a very good idea to document the workflow as you create it and put the documentation in a common repository for the entire IDM team. That way, if you need to adjust the workflow down the road, you can go back to the documentation and see what the requirements were when it was initially designed.
IDM and role-based security
So workflows allow us to assign users to the system and grant them rights. Good IDM systems will also simplify the granting of rights through the use of roles. What is a role? Think of your organization and the people in it. Most organizations have fairly standard job titles for the different positions within the company. These positions have a defined list of tasks and responsibilities to accomplish. A receptionist will have a certain, fairly limited, set of tasks and systems they need to access. It might be email, a corporate directory, and the basic company internal social media site. We could call this a basic role. A help desk technician needs to do certain things and access certain additional systems. They might need access to the servers that hold software to install, or to a licensing database for software. They also need everything in the basic role. And a manager would need access to some parts of the HR system so they can approve vacations, see pay levels for their employees, and recommend raises. Each of these sets of tasks and rights can be grouped together into roles. So if a person is a receptionist, they get the basic role just like every other employee. A help desk person gets the basic role as well as the help desk tech role. The manager of, say, a development team would get the basic role and the manager role, amongst some others.
The best way to think of a role is to think of all the things a person does because of their position in the organization. It is sort of like groups, in a way. The key to good role management is to think of all the things that anyone in that role might need that someone else in the role might not need, but where granting the access would not be a security risk. Again, we have the simplicity/complexity rule here. There is a tendency for organizations to make too many roles because they take the idea of least access to an extreme. There might be a system that not all help desk people will need access to. But there is a certain level of trust the organization has in help desk people that says they can be trusted to handle that information or access to that system. So maybe one help desk person does not currently help with system X, but they could just as easily be cross-trained to help with it without changing their position in the company. There is no reason to deny access to that system. So the role would be set up to grant access to systems A, D, and S, even though they currently only use A and S.
Down the road, maybe the help desk person has gone to school and learned how to program. They get transferred to the development team for the internal wiki. At that point their account would be adjusted to remove them from the help desk role and add them to the wiki development team role. The workflow for removal from the help desk role would remove the enhanced access to A, D, and S. The workflow to add them to the development team role would then add them to the C and Q systems. Initially they might only use the C system, but as they learn more and are given more to do in the team, they start using the Q system too, and they already have access to it.
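Seen as data, a role change is just set arithmetic over role-to-system mappings. The sketch below uses the A/D/S and C/Q systems from the example above; the role names and mappings are illustrative, not any product's schema:

```python
# Hypothetical role model: each role maps to the systems it unlocks, and a
# transfer is the set difference between old and new effective access.
ROLES = {
    "basic":     {"email", "directory", "social"},
    "help_desk": {"A", "D", "S"},
    "wiki_dev":  {"C", "Q"},
}

def effective_access(roles):
    """Union of every system granted by the user's roles."""
    return set().union(*(ROLES[r] for r in roles))

def transfer(old_roles, new_roles):
    """Return (to_revoke, to_grant) when a user changes roles."""
    before, after = effective_access(old_roles), effective_access(new_roles)
    return before - after, after - before

# Help desk tech moves to the wiki development team; the basic role stays.
revoke, grant = transfer({"basic", "help_desk"}, {"basic", "wiki_dev"})
print(sorted(revoke), sorted(grant))
```

Because the basic role appears on both sides, its systems never show up in either the revoke or the grant list, which is exactly the behavior you want from role-based provisioning.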
Here are some final thoughts to consider when looking at an IDM solution. IDM solutions differ greatly in how they work. There are still some systems that do things in a batch-process methodology. This means you create, delete, or modify the user, and a job is put in a queue that is handled maybe once or twice a day. So it might be 12 to 24 hours before the user is created, deleted, or changed. Some systems might have a 15-minute lag in processing. The closer you get to real-time processing, the better the system. Most systems will handle some if not all tasks in near real time (nothing is instantaneous). The more responsive the system, the more secure your systems will be, and the more productive your users will be, because they will not be waiting for the system to process the request.
Next, what different platforms do you need to support? Are you only using Active Directory? Then you only need Active Directory connectors. Do you also need to support other LDAP directories? What about a SQL database that needs identity information? Do you want to include Linux systems in account creation? Maybe you're a school with a large influx of new users every fall, where you need account information brought in from a database system in the registrar's office. So now you need an LDAP driver, a SQL driver, and a Linux driver as well. Make sure your IDM choice can support all the different platforms you need identity information on.
Another feature of a solid IDM system is the ability to track changes on users, giving you a solid audit trail across the systems. You might need this for SOX compliance or other regulatory needs. Or if someone notices suspicious activity on the system, you can track down where the account came from, or who changed it.
The initial implementation might seem a little overwhelming. You will need to talk with all the stakeholders for systems that need identity. However, once the system is implemented a good IDM system will streamline identity management, reduce overhead, and tighten security.
The news regularly has stories of security breaches on websites. Having worked with identity security for years, I have seen a lot of mistakes made that are really easy to avoid simply by making security an essential part of the planning process. Two things to consider in the security of the site are how you are going to store and secure the identities, and how the security model will be built and enforced consistently on the site.
Use a solid LDAP directory
Let's first tackle the identity store. This is the backend that holds all the identity information on users of your site. Think about when you log into, say, Amazon, or Google, or your company's internal social media site. The website needs to know about you and every other user. It needs to store a user name and password at the very least. Often you will want to store a mailing address and email address, maybe a phone number. For the company site you might store the employee's department, their employee number, their manager's name, and the location where they work. All of this is stored in the directory. Often when I read of security breaches, the cause was a SQL database breach. If you use a SQL database, then you need to build all the security around the database from scratch.
There is a better solution than using a basic SQL database. There are directory products designed from the ground up to be secure identity stores. Some examples include NetIQ eDirectory, OpenLDAP, and Microsoft Active Directory. Under the hood these are all databases. However, the structure of the database is designed specifically to store identities, and to easily set security rights over who can see what in the directory. And they have stood the test of time as rock-solid identity stores. Make sure you have a person versed in the particular directory product you decide to deploy. Like SQL, LDAP is a common, cross-platform language, this time specifically for access to identity directories. The directory will have a full suite of features for setting up security, from who can access what information on different identities, to groups that people can be assigned to, and even how many user entries can be retrieved at one time. It will also have the ability to run multiple servers with the same information for automatic failover in case of server issues.
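To make the contrast with a flat SQL table concrete: a directory entry is a set of attributes under a hierarchical name, typed by standard schemas. As a hedged illustration, a typical identity entry using the standard inetOrgPerson schema might look like this in LDIF (the DN and all values are invented):

```
# Illustrative LDIF entry -- attribute names come from the standard
# inetOrgPerson schema; the DN and values are made up.
dn: cn=Jane Doe,ou=people,dc=example,dc=com
objectClass: inetOrgPerson
cn: Jane Doe
sn: Doe
mail: jdoe@example.com
telephoneNumber: +1 555 0100
departmentNumber: 4200
employeeNumber: 12345
manager: cn=Alan Smith,ou=people,dc=example,dc=com
```

Access rights can then be set per attribute, so, for example, anyone can read `mail` while only HR can read `employeeNumber`.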
Use a web security product to lock down the site
A lot of times web developers will write their own security for the site. They will write login pages and use processes they wrote to enforce security. The challenge here is that you have a person whose specialty is writing web pages trying to design a security infrastructure into that same site. In the process they have to constantly keep in mind all the different possible ways an attacker might breach the system and write code to prevent it. The web designer needs to find all the possible holes in the site. The attacker only needs to find one way in.
The alternative is to use a program that is designed to protect the website from outside of the website. Examples would be products like CA Siteminder, Oracle Access Manager, or NetIQ Access Manager. Using an access management solution moves the security workload out of the hands of web designers and into the hands of security engineers. The software is designed specifically to protect sites while making access as easy as possible for users. It is also designed to provide a single sign-on experience in an enterprise, so that a user can log into one website and subsequently access many other sites in the same organization.
When you are getting ready to deploy a new website you need to make sure to bring in your security engineers right up front to help with the design of the site. The structure of the site can significantly impact how easy it is to design the security model around it. It is much easier to secure a site if sections of the site are laid out in a specific way. There might be certain sections of the site which should only be accessed by a subset of the user population. If the pages of those sections are in their own subdirectory then it is very easy to design the policy to restrict that whole section of the website to a particular group or groups of users as listed in the LDAP directory. If those pages are scattered all over then it becomes much more difficult to secure.
So how does the access management software work? It sits in front of the actual website. When a person tries to get to the site, the AM software intercepts the request and evaluates the user trying to access it. The AM software is responsible for requesting the credentials from the user and verifying them. Once the user is authenticated, the AM software lets them in. That is the first step, authentication. Once the person is authenticated, the AM software could simply let them in if they have access everywhere. Otherwise, the AM software next takes on the role of approving or denying authorization to the particular part of the system the user is trying to get to.
The main home page of the system will typically be available to all users of the system. So once a user is authenticated, they will simply be allowed onto the home page. They also will most likely be able to get to the help pages, the about pages, and the contact information pages, so there will be no need for authorization on those pages either. But there might be a section that is only for managers. For that section you might create an LDAP group called managers. The AM product will then have a policy saying that only members of the managers group have access to that section of the website. Another section of the site might be limited to people in the finance and accounting departments. So those users could be in a group called finance, and the AM policy would be set to only allow users in the finance group onto those pages.
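At its core, that policy evaluation is a prefix-to-group lookup. The sketch below is not any AM product's policy language; the paths and group names are invented to show how a request is authorized after authentication:

```python
# Hypothetical policy sketch: map URL prefixes to the LDAP groups allowed
# in, the way an access-management product evaluates authorization after a
# user has authenticated. Paths and group names are invented.
POLICIES = [
    ("/managers/", {"managers"}),
    ("/finance/",  {"finance", "accounting"}),
    ("/",          None),  # None = open to any authenticated user
]

def authorize(path, user_groups):
    """Return True if the user's groups satisfy the first matching policy."""
    for prefix, allowed in POLICIES:
        if path.startswith(prefix):
            return allowed is None or bool(allowed & user_groups)
    return False

print(authorize("/finance/reports", {"finance"}))   # allowed
print(authorize("/managers/reviews", {"finance"}))  # denied
print(authorize("/help", {"staff"}))                # open to all authenticated users
```

Note that policy order matters: the catch-all `/` rule has to come last, or it would swallow every request before the restricted sections were checked.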
As we look at these hypothetical examples, you might start to see why good planning up front is important. If you collect all the pages and code for a particular part of the site in the same space, then as you create new pages, all you have to do is put them in the same subdirectory and they automatically get the same protection as all the other pages. If instead the pages are spread all over the site, you will have to specify the proper protection for each page individually. As new pages are created, the web designers would need to make sure to inform the access management administrators of the addition so the pages can be given the proper protection. The more difficult the security model is, the greater the chance of creating holes that an attacker can breach. Complexity always leads to poor security.
Along with being the traffic cop for the organization's websites, a solid access management program will also provide solid auditing features. In recent years, monitoring access and being able to answer auditors' questions has become more and more important. Also, if there is a suspected breach, it is vital to know who got in and what they accessed. You need to be able to determine what information might have leaked out of the organization, and you need the evidence to take care of the situation. You also need to know if someone is attempting to get to a place on the system they are not supposed to, so you can respond appropriately.
Products like Siteminder, Oracle OAM, and NetIQ Access Manager will keep logs of all the authentications and authorizations that were approved, and where on the site users were allowed, listing who accessed what. They will also record any denied authentication or authorization attempts. There are even programs like NetIQ Sentinel that can monitor these logs and watch for suspicious activity. Obviously there will be times a person accidentally stumbles onto the wrong page of a site and gets denied. However, if the log starts to show a flood of repeated authorization denials to a particular part of the site by the same user account, or a sudden increase in denials across a lot of accounts, then it is important to be notified and able to react. So a program like Sentinel can monitor the logs, look for that suspicious activity, and then follow the notification rules you have in place. You can set up email alerts, pages, and even escalation if something is not dealt with in a timely manner.
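The "flood of repeated denials" pattern is easy to detect once the events are parsed. This is a toy sketch, not Sentinel's rule language; the event format and the threshold of five denials are assumptions:

```python
from collections import Counter

# Hypothetical sketch: scan recent authorization events and flag any user
# piling up repeated denials against the same section of the site. The
# threshold and event format are assumptions for illustration.
DENIAL_THRESHOLD = 5

def suspicious_denials(events, threshold=DENIAL_THRESHOLD):
    """events: iterable of (user, section, outcome). Returns flagged pairs."""
    denials = Counter((u, s) for u, s, outcome in events if outcome == "DENY")
    return [pair for pair, n in denials.items() if n >= threshold]

events = [("mallory", "/finance/", "DENY")] * 6 + [("alice", "/finance/", "ALLOW")]
print(suspicious_denials(events))
```

One stray denial never trips the rule; six denials from the same account against the same section does, which is the distinction between a fat-fingered URL and someone probing the site.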
It is also important to set up a process for storing those logs for an extended period of time. Some vendors have add-on products to deal with the logs over time. These products parse the log files into some sort of database that can then be searched and reported on. Or it is often possible to have a script that retrieves the logs and pushes them into a SQL database. Once there, they can be easily searched and reported on. Now if you end up suspecting a breach by an employee, you can go back and find proof of where they were on the site. This gives you the solid evidence you need to deal with the situation and the user.
Some final notes on network design
When you are laying out the system, there are a few things to consider in the overall design. First is the placement of the LDAP directory. Typically you will not want your directory exposed to direct access by users. So the directory should be at least on an internal network segment behind a firewall. Typically the Internet-facing websites will be in the DMZ, or possibly behind a proxy; there are several ways to set that part up. But the LDAP servers need to be behind that. It is not a bad idea to even put the LDAP servers in their own network island behind a firewall inside the internal network. If you want users to access directory information, they should use some sort of website that offers that up. Often they will use the directory through their email program. But there could also be a protected website (yes, using an AM product to protect it) that presents them with access to the directory. Identity information is very important to protect. The LDAP servers should also be set up to prevent wholesale extraction of data. In most LDAP directories it is possible to configure servers so they won't return more than a particular number of entries, say 100, to a wildcard search. It is also possible to set up a timeout limit. Think of all the important information being stored in the directory; these limits will help keep your organization from ending up on the news. Some people might have a legitimate need to run reports on the entire directory, and for that you can set up a single server in the cluster that allows that access. Again, limit access to that server to a very specific set of people.
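In OpenLDAP, for example, those extraction limits are single configuration directives. A hedged slapd.conf fragment (the values are illustrative, and other directory products have equivalent settings):

```
# Illustrative slapd.conf fragment -- cap what one search can pull out
sizelimit 100   # return at most 100 entries per search
timelimit 30    # abandon searches running longer than 30 seconds
```

The dedicated reporting server mentioned above would simply carry larger limits, with network access restricted to the reporting team.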
Another thing to consider is the layout of the different parts of the AM solution. Many of those solutions have multiple servers or components. Put only the parts that you need into the DMZ; the rest of the solution should be on the internal network segment. The guiding rule should be least access. You want to make sure that the most critical parts of your infrastructure are the best protected. So, for example, with SiteMinder you will want the policy servers on the internal network, while the web agents are of course on the web servers in the DMZ. You will also want to limit access to any identity and access management system to the smallest group of people possible.
There is a new wireless networking standard out that offers impressive improvements over the older technology: 802.11ac. With all the new multimedia uses of wireless networks demanding that more and more data be streamed in real time, the speed increase could not come soon enough. We have moved beyond the days of simply viewing static websites with a few images on them. More and more people are watching YouTube videos, listening to audio streams from Pandora, Spotify, or Apple Music, or watching movies and television shows on services like Hulu, Netflix, and Amazon Prime. Gamers want low-latency connections for real-time gaming too. The new 802.11ac wireless networking can affordably bring this to you.
802.11ac works in the 5 GHz band, which allows for faster throughput. Dual-band 802.11ac routers also remain backward compatible with earlier 802.11n and 802.11g devices via their 2.4 GHz radio. The 802.11n specification topped out around 300 Mbps on typical two-stream hardware. The newer 802.11ac goes from 433 Mbps per stream up to several gigabits per second. This means that multi-stream 802.11ac networking is faster than a USB 2.0 connection (480 Mbps) and can approach USB 3.0 speeds. Now wireless access to NAS, or network attached storage, can be just about as fast as a wired connection to those same hard drives. As people move more and more from desktop computers to laptops, NAS storage becomes more and more useful. You can be anywhere in your home or office and have access to massive amounts of storage that is also very fast to access.
How 802.11ac Works
So how do they pull off all of this magic? First, 802.11ac works exclusively in the 5 GHz spectrum. The 2.4 GHz spectrum is badly overcrowded with all sorts of wireless devices, from baby monitors to security cameras, as well as older-standard wireless networking, and even 802.11n used both 2.4 GHz and 5 GHz. So the engineers realized they needed to stay exclusively on the more wide-open road of 5 GHz to push all that data over.
Second, 802.11ac uses up to eight spatial streams (MIMO technology), where 802.11n was only able to use up to four. This allows a much wider footprint to push all that data through. The maximum channel width for 802.11n was 40 MHz; 802.11ac channels are 80 MHz wide and can be doubled to 160 MHz. So you are looking at a difference of 4×40 MHz compared to 8×160 MHz.
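To see where the headline numbers come from, you can work the PHY rate out from those channel parameters. This is a rough sketch using the 802.11ac (VHT) figures for the top modulation (MCS 9: 256-QAM carrying 8 bits per subcarrier with 5/6 coding) and the short 3.6 µs symbol interval; the data-subcarrier counts (234 for an 80 MHz channel, 468 for 160 MHz) come from the standard.

```python
def vht_phy_rate_mbps(streams, data_subcarriers, bits_per_subcarrier,
                      coding_rate, symbol_us=3.6):
    """Theoretical 802.11ac PHY rate in Mbps.

    Bits carried per OFDM symbol, divided by the symbol duration in
    microseconds, conveniently yields megabits per second.
    """
    bits_per_symbol = data_subcarriers * bits_per_subcarrier * coding_rate * streams
    return bits_per_symbol / symbol_us

# 1 stream, 80 MHz channel, MCS 9 (256-QAM, 5/6 coding)
print(round(vht_phy_rate_mbps(1, 234, 8, 5/6), 1))   # ≈ 433.3 Mbps
# 8 streams, 160 MHz channel, MCS 9
print(round(vht_phy_rate_mbps(8, 468, 8, 5/6), 1))   # ≈ 6933.3 Mbps
```

That first figure is exactly the 433 Mbps per-stream number quoted above, and the second shows how eight streams on a 160 MHz channel reach the multi-gigabit range.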
Third, 802.11ac implements beamforming. The 802.11n standard had it, but it was not standardized between vendors. What beamforming does, basically, is let the router focus its transmit power so that more of it reaches your device. It looks for your device and directs the energy your way, which can boost throughput and improve signal quality.
Obviously, to make use of the new standard you will need a new 802.11ac router. Your networked devices will also need to support the 802.11ac standard. For desktop computers you can simply replace the existing network interface card; for laptops you would need some sort of USB-attached network adapter. You would have to research whether there is a way to upgrade other devices on your network. Rest assured that newer devices supporting 802.11ac are coming out all the time. If you are buying new network-attached devices, make sure to ask whether they support 802.11ac wireless networking!
I am truly a geek's geek. I have worked in computers for over three decades. I have worked on mainframes, Unix systems, Linux before almost anyone knew what it was, and many other systems. I love computers, and love making them do things people think are impossible.