So I am called a grey beard in the industry because I am older. Well, in my case it is literally true: I have a beard and it is grey. When I graduated high school, along with luggage, another big graduation present was a typewriter. Yes, I said typewriter. And once I got to college it was surprising how many others did not have one and would regularly borrow mine to type papers. Well, times have definitely changed. Kids no longer use typewriters, and many have probably never even seen one (other than maybe in a movie). They all use computers now. Maybe they currently use a family computer, and you are looking to get them one of their own for college. But then the question becomes: which computer? The first thing to consider is what they will need to do with it. This ties into the age-old question of "what are you studying?" It matters because the answer makes a huge difference in what the best choice is, and in how much you need to spend.
So obviously the two biggest uses of the computer will be research and typing papers. All the computer options will handle this. If they are going into social sciences, history, or some other discipline where they will not be doing computer programming, then they could very well get by with a Chromebook. They can use Google Docs to type all their papers, and Google Sheets for spreadsheet work. A Chromebook will set you back $150 to $300, with the bulk of them running about $190. So the computer is very inexpensive, it is not a huge target for theft, and all their data is in the Google cloud, so you don't have to worry about losing that important paper if the computer crashes. They will still be able to do research and waste time (I mean, keep in touch) on Facebook. They can video conference with family over Google Hangouts. In short, for probably 70% of all students this will do everything they need. It is also a great choice for a graduate who is not heading off to college.
Now if your prodigy wants to go into, say, computer science, math, or pre-med, then a full computer is probably the better choice. They might need to load and run special programs. At this point the best option is probably a Windows 10 laptop. Go with a laptop rather than a desktop here because they will want, and need, to take it around campus to work on. Desktops are so last decade unless it is a high-powered gaming machine. Along with the Windows 10 laptop, make sure to get them a subscription to CrashPlan to back up the hard drive to the cloud. Don't even think about relying on them using an external hard drive to do backups on their own. You don't want to be that parent with a kid crying on the phone three days before the end of the semester because their hard drive crashed, they have not backed up since Halloween, and they just lost the research paper that is due in two days. With CrashPlan they can log in and get to the backups from anywhere. Of course, if they are smart they will have used Google Docs and will be able to get to their work from anywhere. But if it is a really complicated paper that needed a local copy of Word, then at least the document is safe in CrashPlan. Or if they have a fancy stats program, all the data is in the cloud; they can load the program on a new computer (or the now-fixed computer), download the data, and they are good to go.
If they are planning on studying film, art, photography, or graphic design, it used to be a given that they would run a Mac. That is not as true as it used to be. Macs are still what most people in film and art run, but pretty much every app is available on Windows as well, so you could go either way. The big thing here is: don't go cheap. They will need a fair amount of memory and processing power to run those programs. If you are going with Windows you might want to look at a gaming laptop. The specs you need in a good video or graphic design laptop are the same as what a gamer wants: a really fast processor, a dedicated video card with plenty of video memory, lots of RAM, and good cooling for all of it. My Windows 10 computer is an ASUS Republic of Gamers laptop that I got for $1,200. It is a beast. It renders video beautifully, and Photoshop works really well too.
If your student will be doing graphic design or other artwork, then a Microsoft Surface is a solid choice. Something with a stylus is really good for doing that artwork. You can do it with a laptop and a digitizing pad, but that is so last decade, and it is much less intuitive than the touchscreen on a tablet/laptop convertible. Just make sure the stylus is pressure sensitive.
A higher-end MacBook Pro will also do really well for the film or art student. Or, for something like graphic design, you could get one of the larger iPads plus a keyboard for writing and such. Again, make sure to get CrashPlan.
So the big news for May 2017 in the computer world so far has to be the WannaCry worm/ransomware. I will be surprised if we see something more dramatic than this, at least in the security realm. To start with, what is WannaCry? Well, WannaCry and its variants are what is called ransomware. This is a program that gets into your computer, encrypts all your files, and then flashes a message on the screen telling you to pay, through something like Bitcoin, to get the unlock code to decrypt the files. Surprisingly, the people behind these attacks almost always do send the decryption code once you pay; it is in their best interest to have good "customer service." You put the code in and you get your files back. Often the cost is small; $100 to $500 is typical. They want it easy enough and cheap enough that you are very willing to pay. In the case of WannaCry it was also attached to a worm, code that can jump between computers on its own (at least in some variants).
The thing with ransomware is that it is often not caught by antivirus or anti-malware programs, which is why it so often gets into people's systems. And once it encrypts your files you are pretty well done for if you have not taken precautions ahead of time. The thing is, if you are impacted by ransomware, it means you are not doing what you should to protect yourself from a number of other potential issues either. I have worked in computers now for over 30 years. Yeah, I started in the dark ages. I remember playing text-based computer games, and I played the very first FPS game when it first came out. But I digress. Over the years I have seen people repeatedly lose precious data simply because they did not back it up. I have seen companies lose thousands of dollars when systems went down because of a lack of simple protections. And every time, they wish after the fact that they had had the backups and such. So what do you do to protect your system?
Back up all your data!!!
The first and most important thing is to back up all your data. The easiest way to do this is to get a couple of external hard drives (yes, I said a couple) that are at least 1.5 times the size of your computer's hard drive. You back up to the first drive, then take it to a trusted friend's or family member's house, or put it in a safe deposit box. Next you back up to the second drive. Then you swap the drives on a weekly basis. That way you always have one drive that is not even at the house in case something like a fire happens. This also protects you from ransomware: it might encrypt the drive connected to your computer, but it cannot get to the drive that is offsite. Any drive on your computer that shows up with a drive letter is susceptible to encryption by ransomware. You also want a drive that is not plugged into electricity in case of a lightning strike on your house. I know a person who had their external drives and computer all toasted by a strike. Sure, they used a surge protector. And yes, the surge protector company paid for all new equipment. But the company could not get their data back.
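The rotation scheme above can be sketched in a few lines of Python. This is a minimal sketch, assuming your important data lives under a single folder; the paths in the usage comment are examples only, not a recommendation:

```python
import shutil
from datetime import datetime
from pathlib import Path

def backup(source: str, dest_drive: str) -> Path:
    """Copy everything under `source` into a timestamped folder on the external drive."""
    stamp = datetime.now().strftime("%Y-%m-%d_%H%M%S")
    target = Path(dest_drive) / f"backup_{stamp}"
    shutil.copytree(source, target)  # fails loudly if the drive is missing
    return target

# Usage (paths are hypothetical):
# backup("C:/Users/me/Documents", "E:/")   # this week's drive
# ...next week, swap drives and run it against "F:/" instead
```

A real setup would use a proper backup tool with incremental copies, but even a dumb full copy to an alternating pair of drives beats no backup at all.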
Another option is to back up to the cloud. There are a number of good services, including Backblaze, Acronis True Image, and Carbonite. My favorite is CrashPlan, for a number of reasons. It is one of the more cost-effective options, and it works across multiple operating systems. Another good one is that you can get a login and install the software, then connect a hard drive to the computer of a family member or friend, load CrashPlan on there too, and back up across the Internet to that hard drive without shuttling drives around. And it is totally free to do that. And yes, it is completely encrypted. There are some other nice features to paying for the online cloud system from CrashPlan, like being able to get to your files from anywhere by logging into your account. Go to their site for more information. If you decide you want to use one of the others, they are also rock-solid options; you can google "cloud backup service" to get a listing of all the current offerings. Since the cloud backup is not a mapped drive, ransomware cannot encrypt it. You could even use a cheap computer in your house as your backup server with CrashPlan and back up all your computers to that. You don't have offsite backup then, but it is way better than what most people have.
Oh, and if you are running a Mac then you have a wonderful tool called Time Machine! Use that with the external drives and you have an amazing backup solution. No, I have never found anything quite like it on Windows or Linux. Restoring a Mac from Time Machine is so easy, and you can restore older versions of files too. But incremental file backups are outside the scope of this article.
Make a disk image
So with Windows, if you lose your primary hard drive it is a pain to get back to a usable system, even with data backups. You need to install Windows, then all your security patches, then all your drivers, then all your applications, and then configure things like your Wifi and printers. Then you restore your data and you are golden. But this can take days. BTW, it is way easier on a Mac, and Linux is somewhere in between. There is a good solution for this, though: disk imaging. Here is what you do. You get another external hard drive and plug it into your system after you get your applications and such installed. Then you use software like Acronis True Image, Ghost, or Clonezilla. I have used several, and most of them are very easy. This is not a software review, though; sorry, you're on your own to pick one. Anyway, if your hard drive crashes, or is totally encrypted, all you need to do is boot to the drive-imaging software (be a little careful here if it is ransomware so you don't muck up the other drive), attach the hard drive with the disk image, and start the process. After a short time (this varies depending on the size of the drive) you will be able to reboot the computer and it will look exactly how it did when you made the disk image.
Once you get the computer back up and running you simply restore your data from the backups (data is that stuff you make separate from applications and such - like photos and documents). Now you are back up and running and happy. The key here is that each time you install new drivers, or new applications you want to make a new image. And periodically you will redo your image also since security patches and updates come down once or twice a month. Oh, and keep the hard drive with the disk image off site when you are not using it. The safe deposit box is a good one for this too. That $35 a year to the bank is starting to sound better now eh?
Patch, patch, patch
The final word of advice is to make sure your computer is getting regular patches. Usually the computer will automatically download patches in the middle of the night, so if you usually turn the computer off, you might want to pick one night a week to leave it on overnight. Make sure it is also set up to automatically bring down patches. The other thing you can do is simply request updates manually. But we almost always start to forget, and then bam, you are infected. Oh, and patching also includes regularly upgrading to the newest version of the OS. WannaCry hit hardest on old, unpatched systems, including Windows XP, which has been out of support for over three years now and is four major versions of Windows back. Seriously, it is time to upgrade, and has been for several years now. Yeah, you will need to learn the quirks of Windows 10. But that is better than running insecure software. So many issues are caused by running old operating systems and computers that have not been patched.
Often servers are in a server room that is not convenient to get to. If you use PowerShell to administer your network and want to use PowerShell from your workstation to administer Active Directory (AD), then you need to install the remote administration tools on your workstation. The procedure is fairly straightforward.
With Windows 7 or Windows 8 you need to download the Remote Server Administration Tools (RSAT). On Windows 7 you then go to Control Panel to turn on that Windows feature; on Windows 8, installing the RSAT kit turns the features on automatically. Then you simply import the AD module in your PowerShell session, and you have access to all the AD cmdlets and the PSProvider for AD.
It is always a challenge to be a systems administrator on a network. There are so many computers to manage, and it can take a long time if you go from computer to computer to log in and do things. On Linux there have always been a number of good tools for getting your power user on. Windows was always more of a struggle. With PowerShell, though, it has become much easier, in part due to the object-oriented nature of PowerShell. Even something as simple as getting file listings is enhanced by the way PowerShell handles the task. You can gather the information and then have full control over what you do with it, and you can access much more information. It is also really easy to save that information off to files.
Along with being able to perform the basic simple tasks, PowerShell allows you to expand the same task to multiple computers with one or two commands. You can easily connect to any number of computers with a single PowerShell command, simply by using an array that lists all the computers, or by putting them in a file and reading the file into an array. Maybe you need to gather the log files from a list of computers every day. Instead of spending an hour or two going from computer to computer, you can put the list of computers in a single file, and with one command have PowerShell walk through all the computers and gather the log files for you.
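This article is about PowerShell, but the pattern itself is simple enough to sketch in Python for illustration. The file layout and the `{name}` path template below are invented; a real run would point the template at a UNC path like `\\{name}\logs\app.log`:

```python
import shutil
from pathlib import Path

def gather_logs(computer_list_file: str, path_template: str, dest_dir: str) -> list:
    """Read computer names (one per line) and copy each machine's log file
    into a local collection folder, prefixing each copy with the source name."""
    computers = Path(computer_list_file).read_text().split()
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    copied = []
    for name in computers:
        src = Path(path_template.format(name=name))
        target = dest / f"{name}_{src.name}"
        shutil.copy(src, target)
        copied.append(str(target))
    return copied
```

The PowerShell version would be the same loop over `Get-Content` of the computer list with a `Copy-Item` inside; the point is that the list lives in one file and the loop does the legwork.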
If you are not familiar with PowerShell, then you might want to check out my Introduction to PowerShell course on Udemy. In the course you will develop a solid foundation in PowerShell, the kind you need as an administrator of computers.
Earlier tonight I read an article on Engadget about how an employee at Verizon had accessed customer data and sold it to a private investigator. It was information he was not authorized to access. This is not an uncommon thing. What is fairly shocking is that he started in 2009 and was not caught until 2014. This calls into question Verizon's auditing practices on the data, and also how they set up access control. Performing audits and looking for unauthorized access is not sexy or fun work, and it is labor intensive and costly. There are tools that can help with some of it; an example is NetIQ Sentinel. Programs like Sentinel can be set up to monitor logs, looking for abnormal behavior or data, and then alert the appropriate people to take a closer look. But to set this up you first need to get a baseline of normal activity. You also need to try to determine what a breach might look like, in any of a number of ways, and put in the metrics for the application to look for it.
Along with using a program to monitor activity, it is important for humans to look through logs too. The human brain is very good at spotting patterns and noticing abnormalities; it is like solving a puzzle. I remember years ago reading a book by Cliff Stoll called "The Cuckoo's Egg". It was a fascinating read and my first real exposure to computer security. The gist of the story is that Cliff was an astronomer whose grant funding had run out, so he took a job in the computer data center. This was back in 1986, during the days when mainframes ruled and had a whole slew of minions to keep things running. Students and college departments were charged for computer time. Cliff found a very small discrepancy in the accounting, something less than a dollar if I remember right. Cliff Stoll is somewhat eccentric along with being extremely smart. For a short period of time I actually had the pleasure of chatting with him online, and he is amazing and funny. At any rate, he knew something was wrong and would not drop it, even when others said it was not worth the effort. I don't want to give away too much in case you want to read what is actually a fun book. But I will say that because he would not let the anomaly go, he ultimately tracked down a pretty serious security breach. Get the book... it is well worth it.
In my work with LDAP, one of the things I would often do is run reports on the system looking for accounts where no one had logged in for more than three months. It was not something I was mandated to do, but I knew from Cliff's book that it was an important thing to look at. I would then work with others to determine why we had stale accounts and deal with them. I would be willing to bet that in at least 50% of companies I could find accounts that were unused for over a year. It might even be higher. This is just one example of auditing that should happen regularly. Another would be to get a baseline of how many LDAP searches are performed through the day, every day of the week. All systems will have a normal pattern of searches: it will go up and down through the day, but there will be a predictable level from day to day and week to week. Once you set the baseline, you can monitor for abnormal changes in those levels. If the search pattern changes more than, say, 10 to 20% from normal levels, then you should be alerting the security team to look at it right then.
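A stale-account report like that can be as simple as a script over an export of last-login timestamps. Here is a rough sketch in Python, assuming a CSV export from the directory with `uid` and `last_login` columns; the column names and date format are assumptions, not any particular LDAP server's output:

```python
import csv
from datetime import datetime, timedelta

def stale_accounts(csv_path, days=90, today=None):
    """Return the uids whose last login is older than `days`.

    Expects a CSV with `uid` and `last_login` (YYYY-MM-DD) columns,
    e.g. an export pulled from the directory server."""
    now = today or datetime.now()
    cutoff = now - timedelta(days=days)
    stale = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            if datetime.strptime(row["last_login"], "%Y-%m-%d") < cutoff:
                stale.append(row["uid"])
    return stale
```

Run it monthly, hand the list to the account owners' managers, and the stale accounts get dealt with instead of sitting there as an attack surface.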
The same thing is true of website authorizations. Say you run SiteMinder to protect the system. Every authorization request is monitored. You will have X proper authorizations an hour, and Y failed authorizations per hour from people who typed a password wrong or forgot it. Once you sample those numbers for a baseline, you can again set up a system to monitor the levels and alert when there is an abnormal change. You might need to set up a script that queries the log files to make counts and then stuffs the counts into a database for trending. So there will be a fair amount of upfront work, but once it is done you will be able to find issues. With something like the SiteMinder logs you could even parse the logs into a database that you could search if you think you have a user doing things they are not supposed to, and be able to find where they logged in.
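The alert-on-deviation check at the heart of this is tiny. A sketch in Python, using the 20% threshold from the example above (the threshold and the hourly-count inputs are illustrative):

```python
from statistics import mean

def deviates(baseline_counts, current, threshold=0.2):
    """True if the current hourly count differs from the baseline
    average by more than `threshold` (20% by default)."""
    avg = mean(baseline_counts)
    return abs(current - avg) / avg > threshold
```

The hard part in practice is not this arithmetic; it is collecting honest baseline counts per hour and per day-of-week, since Monday 9am traffic and Sunday 3am traffic are different "normals."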
Ready access to trend information
After you get things set up, you should test them periodically. There are a number of tools you can find that will help you stress the system to generate alerts. I live in Michigan, and in the summer, at noon on the first Friday of every month, they blow the tornado sirens so they know the system is working. You should do the same with your alerting system. Set it up so that there is a sudden dump of a large number of LDAP searches, and make sure the charts trend it and the alerts go out properly. You want to do this periodically because you never know when a system change might break a script or something, and suddenly the protection system is not working right.
Tighten down special access
All systems will have certain accounts with special access. Systems administrators are a perfect example, but there are others too, like help desk personnel; there are certain systems where their work requires higher access levels. Recently I went through training on a program called CyberArk. The goal of the CyberArk system is to tighten control over those special access accounts. First, it has controls over the passwords on system accounts: it can do single-use passwords, or force periodic password changes. It helps get rid of the abysmal practice of keeping passwords in a password-protected (a total joke) Excel spreadsheet. The program can also be set up so that a person has to go through a CyberArk proxy to get access to a system; CyberArk then records absolutely everything the person does while on the system and can play back the session. This does not prevent someone from doing something wrong, but it gives you the data to discipline or prosecute them. And when users know they are being monitored that closely, they will be much less apt to try something they are not supposed to do.
It is also important to limit high-level or system access to your systems. There should only be a limited number of individuals with that access, and they should each have their own account. You should never use a shared account for all administrators. And there should be regular audits that list out all users with special privileges in the system. This needs to be part of any audit, like a SOX audit. Ideally the audits should be performed by people who can view the rights but cannot grant them. This is what is called separation of duties, similar to the separation of duties concept in financial audits. Or at the very least there should be multiple people who verify the information, like when a business requires two different people to sign checks.
Audits and security might not be sexy. They might cost money and not be a profit center. However, if you don't pay attention all along to this then something will happen sooner or later and it will cost you even more after the fact.
Security is often an afterthought in designing and maintaining computer systems. In the process, a company can have gaping security holes it is not aware of until it is too late. Over time it also ends up costing far more to accomplish what they want, and you end up with a very convoluted system that is difficult to manage and maintain.
One of the most overlooked areas is managing identities on the system. This involves creating new accounts, easily assigning rights, and, in the end, removing accounts when needed. Most companies have more than one system with user accounts on it. Many companies will send emails around to the admins of the various systems to create accounts and grant rights. It can often take days to provision a new user into the system. During this time the user is unproductive, and it takes so much time from so many people that it costs the company real money in man hours. However, with a good identity management system, user provisioning can be easily streamlined. There are a number of products out there, like Oracle IDM, IBM Tivoli, Microsoft FIM, SailPoint, and NetIQ IDM. Interesting side note: NetIQ IDM is the oldest and most established product of the bunch. It started as Novell DirXML way back in 2000 (an eternity in computer years).
So what does an IDM product do for you? Well, at a high level it lets you automate user management over multiple systems. You create the user in a single place, often called the identity store, and the IDM system automatically sends information through drivers or connectors to all the other systems to create the user everywhere they need to be. Typically this happens in near real time, so instead of a new user creation taking days, it can take minutes. I worked on one system where the main identity store was connected to three different LDAP directories, four different eDirectory systems, two different Active Directory domains, Lotus Notes, and an IBM mainframe. When a new user was created, the account was processed through the entire system to all the different endpoints in less than 5 minutes. I never tested for more granular results, but it was very fast. And the user has all the accounts they need; no one discovers later that the account on system X was missed.
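The fan-out behavior can be illustrated with a toy model. This is not any vendor's API, just a sketch of the idea with invented class names: create once in the identity store, and the connectors carry the event to every downstream system.

```python
class Connector:
    """Stand-in for a driver/connector to one downstream system."""
    def __init__(self, system_name):
        self.system_name = system_name
        self.accounts = {}  # uid -> attributes, simulating the remote system

    def create(self, uid, attrs):
        self.accounts[uid] = dict(attrs)

class IdentityStore:
    """Single authoritative place where identities are created."""
    def __init__(self, connectors):
        self.connectors = connectors
        self.users = {}

    def create_user(self, uid, attrs):
        """Create the user once here, then fan the event out to every system."""
        self.users[uid] = dict(attrs)
        for c in self.connectors:
            c.create(uid, attrs)
```

In a real product each `Connector.create` is an asynchronous driver pushing to AD, LDAP, Notes, and so on, which is why "minutes instead of days" is the realistic promise rather than "instantaneous."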
User changes are handled the same way. As users work for an organization, they are often granted new tasks and responsibilities, or change positions. This means their access will change: they might need access to a new system, greater access than they had before, or access removed because they changed departments. The change is noted in the IDM system, which then goes through processes that adjust the user accordingly in all the appropriate systems. It might affect only a single website, or it could mean a change in a half dozen different systems. The IDM system can handle all of that.
Deprovisioning a user, deleting them from the system, is just as easy as creating them. The appropriate person executes the workflow to remove them, and in moments the account is dealt with on all the interconnected systems. Now you don't have an account sitting live for weeks because the admin for that system was on vacation; the account has been properly dealt with system-wide. It is not uncommon, in systems that use a manual procedure, to find accounts that have not been used in months, and sometimes even years. Even more dangerous is a stale account that a hacker or disgruntled ex-employee finds is still active, because they can then use that account indiscriminately for an extended amount of time with no one noticing. If a hacker gets hold of an active account, the main user will typically notice fairly quickly. But if it is an account left over from a former user, then typically no one will notice its use, at least for quite some time.
The IDM system uses things called drivers or connectors to interconnect systems. These drivers are configurable as to what information they share. One system might only need the user ID, first and last names, and the user's manager's name; another system might need a dozen or more attributes. You might even need to modify the format of some of the information on its way to the other system. Drivers can be adjusted accordingly. Most IDM systems can also be set up to enforce rules, like that an identity can only be created in a single place. If a driver sees an account created on an endpoint system, it can be set up to reject or roll back the change on the endpoint and notify the identity security team of the unauthorized attempt. It is important to consider where you want the authoritative identity source to be, and how you want to handle possible breaches. You also need to document what information needs to go where, so the drivers can be properly configured.
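Per-system attribute mapping is easy to picture in code. A sketch in Python, where the attribute names (`sAMAccountName`, `USERID`, and so on) are illustrative examples, not a spec for any particular driver:

```python
def apply_mapping(user, mapping):
    """Project and transform identity attributes for one downstream system.

    `mapping` maps each downstream attribute name to either a source
    attribute name or a function of the whole user record."""
    out = {}
    for target_attr, source in mapping.items():
        out[target_attr] = source(user) if callable(source) else user[source]
    return out

# One system only needs a few fields, copied as-is:
ad_mapping = {"sAMAccountName": "uid", "givenName": "first", "sn": "last"}

# Another wants the same data reformatted (uppercase, 8-char limit):
mainframe_mapping = {"USERID": lambda u: u["uid"].upper()[:8]}
```

A driver in a real IDM product is essentially this mapping plus the plumbing to deliver it, which is why documenting "what information needs to go where" up front makes the driver configuration straightforward.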
IDM and workflow
So we have been talking at a high level so far about how IDM works. You initiate a new user creation, a change to a user, or a removal, and the system takes care of the task. But sometimes there are complexities to user creation. In larger organizations there are often approvals that need to happen for certain accounts to be created or granted particular rights. This is where workflows become important. A top-notch IDM solution will have at its core the ability to create robust workflows. But what is a workflow? Think of a workflow as a series of rules or actions that need to be accomplished for the task to complete. The workflow is initiated, evaluates events like approvals, and then takes the appropriate action.
So let's say you have a user who has been moved up to an entry-level management position. In their new role they need to be able to access the HR records for all of the people who now report to them. They also need to be able to approve vacation requests and time sheets in the HR system. So the workflow is initiated to allow that access. The workflow might be kicked off by an assistant in HR or someone on the help desk. This person does not have authority to authorize the new rights, so the workflow knows it needs approval from another person. It sees the need for the approval and sends a notification to that person that an approval is needed. Once the person approves the request, the workflow proceeds with granting the access and sends the individual notification that they now have the appropriate access. But what happens if the approver does not respond right away? Well, the workflow might have an escalation rule that says if approval is not granted in, say, 4 hours, a notice is sent to the approver's manager to let them know that the approval is awaiting a response and has passed the initial SLA. Depending on the workflow, the manager might then give approval themselves, or it might be up to the manager to contact the approver and find out what the holdup is.
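That approve-or-escalate logic can be modeled in a few lines. This is a toy sketch; the states, field names, and the 4-hour SLA are invented for illustration:

```python
from datetime import datetime, timedelta

class ApprovalWorkflow:
    """Toy workflow: one approval step with an SLA-based escalation."""
    def __init__(self, approver, manager, sla_hours=4):
        self.approver = approver
        self.manager = manager
        self.sla = timedelta(hours=sla_hours)
        self.started = datetime.now()
        self.state = "awaiting_approval"
        self.notifications = []

    def approve(self):
        """The approver signs off; access is granted and the requester notified."""
        self.state = "access_granted"
        self.notifications.append("requester: access granted")

    def check_sla(self, now):
        """Periodic check: escalate to the approver's manager past the SLA."""
        if self.state == "awaiting_approval" and now - self.started > self.sla:
            self.state = "escalated"
            self.notifications.append(f"{self.manager}: approval overdue")
```

A real IDM workflow engine adds persistence, audit logging, and multi-step chains, but the core is exactly this state machine: events move the request between states, and timers trigger escalations.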
The thing with workflows is that they can be as simple or as complex as the organization needs. One suggestion is that you want to keep the workflow as simple as you can and as complex as you need it to be. Sometimes people make overly complex workflows and they become more of a burden than a help. The more complex anything is the bigger the chance of failure. This is as true in workflow design as it is in any other systems design. Strive for simplicity using the complex abilities only when truly needed.
When you create a workflow, the first thing you should do is determine the flow on paper. What is the end point that you need to achieve? What approvals do you need in the workflow? What is the SLA (service level agreement, i.e. the timeframe) for the tasks at hand? Who does it need to escalate to? Once you have the flow down on paper, it is a simple job to build the rules into the workflow in the IDM system. A flowcharting program like Visio is a huge help for this, and that flowchart can be added to any documentation you create. It is also a very good idea to document the workflow as you create it and put the documentation in a common repository for the entire IDM team. That way, if you need to adjust the workflow down the road, you can go back to the documentation and see what the requirements were when it was initially designed.
IDM and role-based security
So workflows allow us to add users to the system and grant them rights. Good IDM systems will also simplify the granting of rights through the use of roles. What is a role? Well, think of your organization and the people in it. Most organizations have fairly standard job titles for different positions within the company, and these positions have a defined list of tasks and responsibilities. So a receptionist will have a certain, fairly limited, set of tasks and systems they need to access. It might be email, a corporate directory, and the basic company internal social media site. We could call this a basic role. A help desk technician needs to do certain things and access certain additional systems. They might need access to the servers that hold software to install, or to a licensing database for software. They also need everything in the basic role. And a manager would need access to some parts of the HR system so they can approve vacations, see pay levels for their employees, and recommend raises. Each of these sets of tasks and rights can be grouped together into roles. So if a person is a receptionist, they get the basic role just like every other employee. A help desk person gets the basic role as well as the help desk tech role. The manager of, say, a development team would get the basic role and the manager role, amongst others.
So the best way to think of a role is to think of all the things a person does because of their position in the organization. It is sort of like groups, in a way. The key to good role management is to include in the role everything anyone in that role might need, even things a particular person in the role might not currently use, as long as granting that access would not be a security risk. Again, we have the simplicity/complexity rule here. There is a tendency for organizations to create too many roles because they take the idea of least access to an extreme. There might be a system that not all help desk people will need access to. But there is a certain level of trust the organization has for help desk people that says they can be trusted to handle that information or that system. So maybe one help desk person does not currently help with system X, but they could just as easily be cross-trained to help with it without changing their position in the company. There is no reason to deny access to that system. So the role would be set up to grant access to systems A, D, and S, even though they currently only use A and S.
Down the road, maybe the help desk person has gone to school and learned how to program. They get transferred to the development team for the internal wiki. At that point their account would be removed from the help desk role and added to the wiki development team role. The workflow for removal from help desk would remove the enhanced access to A, D, and S. The workflow adding them to the development team role would then grant access to the C and Q systems. Initially they might only use the C system, but as they learn more and are given more things to do on the team they start using the Q system too, and they already have access to it.
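To make the role idea concrete, here is a tiny sketch in Python of roles as sets of rights whose union gives a user's effective access. The role names and the system letters (A, D, S, C, Q) follow the hypothetical example above; none of this comes from a real IDM product.

```python
# Sketch: each role is a set of entitlements; a user's effective
# rights are the union of every role they hold.
ROLES = {
    "basic":    {"email", "directory", "social"},
    "helpdesk": {"A", "D", "S"},
    "wiki-dev": {"C", "Q"},
}

def effective_rights(assigned_roles):
    """Union the entitlement sets of every role the user holds."""
    rights = set()
    for role in assigned_roles:
        rights |= ROLES[role]
    return rights

# A help desk tech holds the basic role plus the help desk role.
tech = effective_rights(["basic", "helpdesk"])

# After the transfer, helpdesk is removed and wiki-dev is added.
dev = effective_rights(["basic", "wiki-dev"])
```

Note how removing the help desk role and adding the wiki development role automatically swaps the A, D, S access for C and Q, with the basic role untouched.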
Some final thoughts to consider when looking at an IDM solution. IDM solutions vary widely in how they work. There are still some systems that do things in a batch process methodology. This means you create, delete, or modify the user and a job is put in a queue that is handled maybe once or twice a day, so it might be 12 to 24 hours before the user is created, deleted, or changed. Other systems might have a 15 minute lag in processing. The closer you get to real-time processing, the better the system. Most systems will handle some, if not all, tasks in near real time (nothing is instantaneous). The more responsive the system, the more secure your environment will be. Your users will also be more productive because they will not be waiting for the system to process requests.
Next, what different platforms do you need to support? Are you only using Active Directory? Then you only need Active Directory connectors. Do you also need to support other LDAP directories? What about a SQL database that needs identity information? Do you want to include Linux systems in account creation? Maybe you're a school with a large influx of new users every fall, where account information needs to be brought in from a database system in the registrar's office. Now you need an LDAP driver, a SQL driver, and a Linux driver as well. Make sure your IDM choice can support all the different platforms you need identity information on.
Also, another feature of a solid IDM system is the ability to track changes on users. This gives you a solid audit trail on your systems. You might need this for SOX compliance or other regulatory needs. Or if someone notices suspicious activity on the system, you can track down where the account came from, or who changed it.
The initial implementation might seem a little overwhelming. You will need to talk with all the stakeholders for systems that need identity. However, once the system is implemented a good IDM system will streamline identity management, reduce overhead, and tighten security.
The news regularly has stories of security breaches on websites. Having worked with identity security for years, I have seen a lot of mistakes made that are really easy to avoid simply by making security an essential part of the planning process. Two things to consider in the security of the site are how you are going to store and secure the identities, and how the security model will be built and enforced consistently on the site.
Use a solid LDAP directory
Let's first tackle the identity store. This is the backend that will hold all the identity information on the users of your site. Think about when you log into, say, Amazon, or Google, or your company's internal social media site. The website needs to know about you, and every other user. It will need to store a user name and password at the very least. Often you will want to store a mailing address and email address, maybe a phone number. For the company site you might store the employee's department, their employee number, their manager's name, and the location where they work. All of this is stored in the directory. Often when I read of security breaches, the cause was a SQL database breach. If you use a SQL database then you need to build all the security around the database from scratch.
There is a better solution than using a basic SQL database. There are directory products that are designed from the ground up to be secure identity stores. Some examples would include NetIQ eDirectory, OpenLDAP, and Microsoft Active Directory. Under the hood these are all databases. However, the structure of the database is designed specifically to store identities, and to make it easy to set security rights on who can see what in the directory. And they have stood the test of time as rock-solid identity stores. Make sure you have a person versed in the particular directory product you decide to deploy. Like SQL, LDAP is a common cross-platform language, this time specifically for access to identity directories. The directory will have a full suite of features for setting up security, from who can access what information on different identities, to groups that people can be assigned to, and even how many user entries can be retrieved at one time. It will also support multiple servers holding the same information, for automatic failover in case of server issues.
Use a web security product to lock down the site
A lot of times web developers will write their own security for the site. They will write login pages and enforce security with processes they write themselves. The challenge here is that you have a person whose specialty is writing web pages trying to design a security infrastructure into that same site. In the process they have to constantly keep in mind all the different ways an attacker might breach the system and write code to prevent it. The web designer needs to find all the possible holes in the site. The attacker only needs to find one way in.
The alternative is to use a program that is designed to protect the website from outside the website. Examples would be products like CA SiteMinder, Oracle Access Manager, or NetIQ Access Manager. Using an access management solution moves the security workload out of the hands of web designers and into the hands of security engineers. The software is designed specifically to protect sites while making access as easy as possible for users. It is also designed to give a single sign-on experience in an enterprise, so that a user can log into one website and subsequently access many other sites in the same organization.
When you are getting ready to deploy a new website you need to make sure to bring in your security engineers right up front to help with the design of the site. The structure of the site can significantly impact how easy it is to design the security model around it. It is much easier to secure a site if sections of the site are laid out in a specific way. There might be certain sections of the site which should only be accessed by a subset of the user population. If the pages of those sections are in their own subdirectory then it is very easy to design the policy to restrict that whole section of the website to a particular group or groups of users as listed in the LDAP directory. If those pages are scattered all over then it becomes much more difficult to secure.
So how does the access management software work? Well, it sits in front of the actual website. When a person tries to get to the site, the AM software intercepts the request and evaluates the user trying to access it. The AM software is responsible for requesting the credentials from the user and verifying them. Once the user is authenticated, the AM software will let them in. That is the first step: authentication. Once the person is authenticated, the AM software could simply let them in if they will have access everywhere. Otherwise, the AM software next takes on the role of approving or denying authorization to the particular part of the system the user is trying to get to.
The main home page of the system will typically be available to all users. So once a user is authenticated they will simply be allowed onto the home page. They also will most likely be able to get to the help pages, the about pages, and the contact information pages, so there is no need for authorization on those pages either. But there might be a section that is specifically for managers only. For that section you might create an LDAP group called managers. The AM product will then have a policy that says only members of the managers group have access to that section of the website. Another section of the site might be limited to people in the finance and accounting departments. So those users could be in a group called finance, and the AM policy would be set to allow only users in the finance group onto those pages.
As we look at these hypothetical examples you might start to see why good planning up front is important. If you can collect all the pages and code for a particular part of the site in the same place, then as you create new pages all you have to do is put them in the same subdirectory and they will automatically be covered by the same protection as all the other pages. If instead the pages are spread all over the site, you will have to specify each page individually for the proper protection, and as new pages are created the web designers will need to make sure to inform the access management administrators of the addition so the pages get the proper protection. The more difficult the security model is, the greater the chance of creating holes that an attacker can breach. Complexity always leads to poor security.
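To illustrate the subdirectory idea, here is a hypothetical sketch in Python of a path-prefix authorization check like the ones described above. The paths and group names are made up, and a real AM product expresses these policies in its own configuration rather than in code you write; this just shows why prefix-based layout makes the policy simple.

```python
# Sketch: map each protected section of the site (by path prefix)
# to the LDAP groups allowed in. Prefixes and groups are invented.
POLICIES = {
    "/managers/": {"managers"},
    "/finance/":  {"finance"},
}

def authorized(path, user_groups):
    """Deny unless the user holds a group allowed for the section.
    Pages under no protected prefix (home, help, about, contact)
    need no authorization and pass through."""
    for prefix, allowed in POLICIES.items():
        if path.startswith(prefix):
            return bool(allowed & set(user_groups))
    return True
```

Because every page under /finance/ inherits the same rule, adding a new finance page requires no policy change at all; that is the payoff of planning the site layout with security in mind.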
Along with being the traffic cop for the websites of the organization, a solid access management program will also give solid auditing features. In recent years, monitoring access and being able to answer auditors' questions has become more and more important. Also, if there is a suspected breach it is vital to know who got in and what they had access to. You need to be able to determine what information might have leaked out of the organization, and you need the evidence to take care of the situation. You also need to know if someone is attempting to get to a place on the system they are not supposed to be, so you can respond appropriately.
Products like Siteminder, Oracle OAM, and NetIQ Access Manager will keep logs of all the authentications and authorizations that were approved, and where on the site they were allowed, listing who accessed what. They will also record any denied authentication or authorization attempts. There are even programs like NetIQ Sentinel that can monitor these logs and watch for suspicious activity. Obviously there will be times a person accidentally stumbles on the wrong page of a site and gets denied. However, if the log starts to show a flood of repeated authorization denials to a particular part of the site by the same user account, or a sudden increase in denials by a lot of accounts, then it is important to be notified of this and be able to react. So a program like Sentinel can monitor the logs and look for that suspicious activity. Then you can have rules in place for notification of the suspicious activity. You can set up for email alerts, pages, and even escalation if something is not dealt with in a timely manner.
It is important to also set up a process for storing those logs for an extended period of time. Some vendors have add-on products to deal with the logs over time. These products parse the log files into some sort of database that can then be searched and reported on. Or it is often possible to have a script retrieve the logs and push them into a SQL database, where they can be easily searched and reported on. Now if you end up suspecting a breach by an employee, you can go back and find proof of where they were on the site. This gives you the solid evidence you need to deal with the situation and the user.
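As a rough sketch of that script idea, here is what pushing parsed log lines into a SQL database and reporting on denials might look like, using Python with SQLite. The log format shown is invented purely for illustration; real AM log formats differ by product and you would parse them accordingly.

```python
import sqlite3

# Hypothetical "user,decision,path" log lines standing in for a
# real access management log.
LOG_LINES = [
    "jdoe,DENY,/finance/ledger",
    "jdoe,DENY,/finance/ledger",
    "asmith,ALLOW,/home",
    "jdoe,DENY,/finance/payroll",
]

db = sqlite3.connect(":memory:")  # a real script would use a file
db.execute("CREATE TABLE access(user TEXT, decision TEXT, path TEXT)")
db.executemany(
    "INSERT INTO access VALUES (?, ?, ?)",
    [line.split(",") for line in LOG_LINES],
)

# Who is piling up authorization denials?
rows = db.execute(
    "SELECT user, COUNT(*) FROM access "
    "WHERE decision = 'DENY' GROUP BY user"
).fetchall()
```

Once the records are in a database like this, the searches and reports described above become one-line SQL queries instead of grepping through raw log files.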
Some final notes on network design
When you are laying out the system there are a few things to consider in the overall design. First is the placement of the LDAP directory. Typically you will not want your directory exposed to direct access by users, so the directory should be at least on an internal network segment behind a firewall. Typically the Internet-facing websites will be in the DMZ, or possibly behind a proxy; there are several ways to set that part up. But the LDAP servers need to be behind that. It is not a bad idea to put the LDAP servers in their own network island behind a firewall inside the internal network. If you want users to access directory information, they should use some sort of website that offers that up. Often they will use the directory in their email program, but there could also be a protected website (yes, using an AM product to protect it) that presents them with access to the directory. Identity information is very important to protect. The LDAP servers should also be set up to prevent wholesale extraction of data. In most LDAP directories it is possible to set up servers so they won't return more than a particular number of entries, say 100, to a wildcard search. It is also possible to set a timeout limit. Think of all the important information stored in the directory; this will help keep your organization off the news. Some people might need to run reports on the entire directory, and for that you can set up a single server in the cluster that allows that access. Again, limit access to that server to a very specific set of people.
Another thing to consider is the setup of the different parts of the AM solution. Many of those solutions have multiple servers or parts that make up the solution. Put only the parts that you need to into the DMZ. The rest of the solution should be on the internal network segment. The guiding rule should be least access. You want to make sure that the most critical parts of your infrastructure are better protected. So for example with Siteminder you will want the policy servers on the internal network while the web agents are of course on the web servers in the DMZ. You will want to limit access to any identity and access management system to the smallest group of people possible.
There is a new wireless networking standard out that gives impressive improvements over the older technology. The standard is called 802.11ac. With all the new multimedia uses of wireless networks demanding more and more data be streamed in real time the speed increase could not come fast enough. We have moved beyond the days of simply viewing static websites with some images on them. More and more people are watching YouTube videos, listening to audio streams from Pandora, Spotify, or Apple Music, or watching movies and television shows on things like Hulu, Netflix, and Amazon Prime. Gamers want low latency connections for real time gaming too. The new 802.11ac wireless networking can affordably bring this to you.
802.11ac works in the 5 GHz band, which allows for faster throughput, and 802.11ac routers remain backward compatible with the earlier 802.11n and 802.11g technology. The 802.11n specification had a maximum theoretical throughput of 300 Mbps. The newer 802.11ac goes from 433 Mbps up to several gigabits per second. This means that 802.11ac networking is faster than a USB 2.0 connection, and potentially approaches a USB 3.0 connection. Now wireless access to NAS, or network attached storage, can be just as fast as a wired connection to those same hard drives. As people move more and more from desktop computers to laptops, NAS storage becomes more and more useful. You can be anywhere in your home or office and have access to massive amounts of storage that is also very fast to access.
How 802.11ac Works
So how do they pull off all of this magic? Well, first, 802.11ac works exclusively in the 5 GHz spectrum. The 2.4 GHz spectrum is way overcrowded with all sorts of wireless devices, from baby monitors to security cameras, as well as older-standard wireless networking. Even 802.11n used both 2.4 GHz and 5 GHz. So the engineers realized they needed to stay exclusively on the more wide open road of 5 GHz to push all that data over.
Second, 802.11ac uses up to eight spatial streams (MIMO technology), where 802.11n was only able to use up to four. This allows a much wider footprint to push all that data through. The maximum channel width for 802.11n was 40 MHz. The 802.11ac standard uses 80 MHz channels, and those can be doubled to 160 MHz. So you are looking at a difference of 4x40 MHz compared to 8x160 MHz.
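Putting rough numbers on that: a single 802.11ac spatial stream tops out around 433 Mbps on an 80 MHz channel, doubling the channel roughly doubles the per-stream rate, and the spec allows up to eight streams. A quick back-of-the-envelope calculation:

```python
# Rough theoretical maximums built from the spec numbers quoted above.
per_stream_80mhz = 433                       # Mbps, one stream, 80 MHz
per_stream_160mhz = 2 * per_stream_80mhz     # doubling the channel width

# With the full eight spatial streams the spec allows:
max_rate = 8 * per_stream_160mhz             # Mbps, roughly 6.9 Gbps
```

Keep in mind these are theoretical PHY rates; real-world throughput is always far lower, and shipping routers rarely implement all eight streams.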
Third, 802.11ac implements standardized beamforming. The 802.11n standard had it, but it was not standardized between vendors. Beamforming basically allows the router to focus its transmit power so that more of it gets to your device. It looks for your device and focuses the energy your way. This can boost throughput and improve signal quality.
Obviously, to make use of the new standard you will need a new 802.11ac router. Your networked devices will also need to support the 802.11ac standard. For desktop computers you can simply replace the existing network interface card. For laptops you would need to get some sort of USB-attached network adapter. You would have to research whether there is a way to upgrade other devices on your network. Rest assured that newer devices supporting 802.11ac are coming out. If you are buying new network-attached devices, make sure to ask if they support 802.11ac wireless networking!
If you have never used an LDIF file before to manage a directory then you are in for a treat. If you are looking for techy information on how to build out an LDIF, including examples, then just look a little lower here.
So what is an LDIF file? An LDIF is a text file that you can feed into a directory to add, delete, move, or modify objects in the directory. You can adjust users, groups, and containers in the directory. If you are doing a single user then it might be easier to use any GUI tool you have available to do that request. But if you are doing a number of users, or need to do the exact same change on a number of trees, then an LDIF will make your life much easier. This is also a really great way to level set the information in different trees or on different objects in the same tree that should all have the same attribute settings.
When you do a command-line LDAP search on a directory, the results the command gives are in LDIF format. If you redirect that output to a file, you can use it as a base or template for building out your LDIF file. I will give some examples of this at the bottom of the page. Keep in mind, though, that ldapsearch output is limited to a maximum of 80 characters across (and a lot of implementations only go 78 characters). So if an attribute's value runs past that limit, the output will have line wraps that you need to deal with when creating an LDIF for input.
There are several things to keep in mind with LDIF files. First, just like ldapsearch, LDIF keywords and attribute names are case insensitive. You can use upper case, lower case, or mixed case. The attribute values themselves, however, go in exactly as you type them: however you enter them in the LDIF is how they will show up in the directory, and how they will display in the output of a search. So if there is a particular need for case in a value, you need to be mindful of that in creating your LDIF. Sometimes certain programs are written expecting a certain case for an attribute value, and you need to put it in using that same case. Some people will use mixed case as a way to make names easier to read. If you want that, then put it in that way in the LDIF. But the ldapmodify program will not care what case you use.
Any line that starts with the pound or hash sign is considered a comment and will be ignored when the file is processed.
# This is the comment line and can be used to section things or note info
Trailing Spaces and Line-folding
You will want to make sure that there are no trailing spaces on any lines. If a value for a single attribute entry needs to continue onto a second line, begin the continuation line with a single space. The LDIF parser will remove the line break and that one leading space and concatenate the two lines back into a single value.
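The unfolding rule is simple enough to sketch in a few lines of Python. This is just an illustration of how a parser reassembles folded lines, not production LDIF handling; the folded DN is a made-up example.

```python
# Sketch of LDIF line unfolding: a continuation line starts with a
# single space, which is stripped when it is joined to the previous
# line.
def unfold(lines):
    out = []
    for line in lines:
        if line.startswith(" ") and out:
            out[-1] += line[1:]   # drop the one leading space, join
        else:
            out.append(line)
    return out

folded = [
    "dn: cn=coswald,ou=us",
    " ers,o=example",
    "fullname: Clara Oswald",
]
```

Running unfold on those three lines yields the two logical lines the parser actually processes, with the DN stitched back together.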
End of record and multiple modify option separators
One other big gotcha that a lot of people get hit with is missing the blank line at the end of each record. Every entry in an LDIF file needs to be ended by a line with nothing on it, just a blank line. The file needs to end with a final clean blank line too. You also need to be careful if you create the file in an editor on a Windows computer and then copy it to a Linux or Unix server to run. Windows uses CR+LF line endings by default, while *nix uses LF only, so it is possible the file will not have the proper line ending codes for the *nix system. A lot of text editors on Windows have a setting for using *nix line endings.
If you are doing a modify on one or more records and are doing multiple operations on each object you need to separate the different operations with a hyphen. Look below to the modify section for examples of this.
The Big DN
Every item or object in the directory that you are going to modify is known by its DN. This is the fully qualified name of the object in the tree, and it will be the first line of any record in the LDIF file you are updating. The DN will be in LDAP format: the line starts with dn: followed by the full DN of the object. This is NOT case sensitive. If you are dealing with containers, the DN will not start with cn= but with the type of container, like ou=, instead. And of course, make sure there are no trailing spaces on the line, just like all other lines, or you will get an error.
Pushing the LDIF into the system
There are several ways to push an LDIF file into a system. Probably the most generic is to use the ldapmodify command. This is available on all *nix and Mac systems. You can find it for Windows too. If you are going against AD then you can download Microsoft's LDIFDE program to push the file in too. Some other environments have additional tools they make available to push the file into the system. The most basic ldapmodify line to accomplish this is:
ldapmodify -h hostname -D FQDNAdmin -w password -f filename.ldif
If you use this and there is a record with an issue, the processor will stop pushing entries and throw an error on the screen. If you want the program to keep processing records beyond the one with the error, you can add -c to the command line. I will show an example later where this is very handy on a modify.
Note on system differences (and disclaimer)
LDAP and LDIF are supposed to be standards, but not everyone implements everything exactly the same way, and you will find some slight differences between implementations. It is always good to test your LDIF files in a test directory before jumping in all gung ho on your production system. It is expected that you will use all due diligence in making sure you have a good LDIF file and that it will update your particular system the way you expect. One way to learn how your particular system handles LDIFs is to do an LDAP search and export the information to a file; command-line LDAP search programs output in LDIF format.
Adding objects to the system
If you want to add new users, groups, organizational units etc. then you need to perform an add operation. You designate the change type of an LDIF record with the changetype: line that goes right below the DN of the object you are adding or modifying.
With an add you need to make sure that you include the objectclass of the object you are creating. For most systems you should only need to specify the base class of the object and the system should fill in any additional entries. But I don't like leaving that to chance. So my recommendation is to do an ldapsearch for an object like the type you want to add and then copy the objectclass section right out of the export. That way you get all the objectclass lines for the new object.
Second, you MUST put in all required attributes of the object. You don't have to list any optional attributes unless you want to populate them. But you need to have the required attributes specified. This is something that will be different on various systems and platforms. Go into the definition of your class in the schema to determine what is required for any particular object class.
So let's make a couple of users now. Let's make Clara Oswald and Bob Cratchit.

# The container in these DNs is a placeholder - adjust the DNs, the
# objectclass lines, and the attribute list to match your own tree
# and schema (export an existing user with ldapsearch as a template)
dn: cn=coswald,ou=users,o=example
changetype: add
objectclass: inetOrgPerson
cn: coswald
sn: Oswald
givenName: Clara
uid: coswald
fullname: Clara Oswald

dn: cn=bcratchit,ou=users,o=example
changetype: add
objectclass: inetOrgPerson
cn: bcratchit
sn: Cratchit
givenName: Bob
uid: bcratchit
fullname: Bob Cratchit
Each attribute gets a line that starts with the name of the attribute ending in a colon followed by a space then the value you want in the attribute. If the attribute is a multivalued attribute and you want to put in multiple entries then simply use the attribute name several times and it will add each entry to the array in the attribute. If you have a binary entry then you will need to encode it in Base64.
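For the Base64 case, here is how you might encode a binary value in Python for the double-colon LDIF syntax. The jpegPhoto attribute name is a common example, but the byte string here is just placeholder data, not a real image.

```python
import base64

# Binary or non-ASCII values go into an LDIF with a double colon
# and a Base64-encoded value, e.g.  jpegPhoto:: <encoded bytes>
photo = b"\xff\xd8\xff\xe0fake-jpeg-bytes"   # placeholder data
line = "jpegPhoto:: " + base64.b64encode(photo).decode("ascii")
```

Note the two colons: a single colon means a plain value, while a double colon tells the LDIF parser to Base64 decode what follows before storing it.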
I would recommend you export a user in ldif format using ldapsearch to see all the attributes that are populated for them. This is especially true with what objectclasses they should have. Most systems will not export the password. This will help in knowing how to format the entries for the new users to add.
Two things to note here too. First, the uid and the cn are different attributes and can be set to different values. Often they are set the same, but nothing prevents you from setting them to different values. And since they are different attributes they both need to be set.
Second, and this is a little known bit of information, the cn attribute is actually a multivalued attribute (at least on the systems I have tested it on). Typically you will only ever see a single value in it. But it is possible to stuff more than one in there. Most programs will only ever read the first value in the array (multivalued attributes present to programming languages as arrays). So if the one you really want is the second or third in the array you could find some really odd behavior in your program. This becomes more important in the modify change type that we will talk about below.
You can add a container in basically the same way. Let's do an OU.

# Again, adjust the DN and objectclass to fit your own tree
dn: ou=students,o=example
changetype: add
objectclass: organizationalUnit
ou: students
Pretty simple, eh? List out the attributes and the values for each one below the first two lines of the record, make sure there are no trailing spaces, and make sure each record in the file ends with a clean blank line. If you are setting up a test tree and want a large number of records to test with, you can use a bash script, VBScript, or PowerShell program (or the language of your choice) to generate the file, using some sort of cool algorithm to generate all the names and passwords, then simply push the file in.
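Here is a minimal sketch of that generator idea in Python. The container DN, object class, and naming pattern are placeholders to adjust for your own tree and schema before pushing the file in with ldapmodify.

```python
# Sketch: generate LDIF add records for a test tree. Everything
# here (DN, objectclass, attribute set) is a placeholder.
def make_record(i):
    uid = "testuser%03d" % i
    return "\n".join([
        "dn: cn=%s,ou=test,o=example" % uid,
        "changetype: add",
        "objectclass: inetOrgPerson",
        "cn: " + uid,
        "sn: User%03d" % i,
        "uid: " + uid,
    ]) + "\n"   # trailing newline so records are blank-line separated

ldif = "\n".join(make_record(i) for i in range(1, 4))
```

Write the resulting string to a file (with a final blank line, per the gotcha above) and feed it to ldapmodify just like a hand-written LDIF.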
Often companies or organizations need to bring a number of people on board at the same time, and LDIF files can make that job much easier. In a future post I will show a VBScript you can use to easily build the LDIF for creating users from an Excel spreadsheet, since it is often easy to get a spreadsheet of the new users.
A future post will also show how to modify existing users and groups and how to delete objects from the tree.
I am truly a geek's geek. I have worked in computers for over three decades, on mainframes, Unix systems, Linux before almost anyone knew what it was, and many other systems. I love computers, and love making them do things people think are impossible.