We’ve been seeing a certain issue arise for random users at the company where I’m currently doing support. It would run something like this: someone would leave to work from home or go on a business trip, they would restart their laptop, and they would lose all of their mapped drives.
Some of the time things of this sort just happen. It is Windows after all. Do the same thing twice and get two different results. That’s not shocking. But it was forming a pattern. It was, shall we say, spreading.
I decided it wasn’t an issue each of these machines was having but rather something more systemic. I asked to look at the login script responsible for mapping drives. The original script looked something like this:
net time \\[our-dc] /set /yes
net use M: /delete /yes
net use P: /delete /yes
net use S: /delete /yes
net use T: /delete /yes
net use V: /delete /yes
net use M: \\[a-server]\[some-share]
net use P: \\[another-server]\[a-different-share]
net use S: \\[we-have-many-servers]\[and-lots-of-shares]
(We actually have several similar scripts depending on the role of the user, but they were all close enough to this one. Also, don’t worry about net time, which merely calls out to a time server. There was a syntax problem there as well, but I corrected it, and the syntax you see above is now correct. And, damn it, you are smart enough by now to recognize that the stuff in the [brackets] is for substituting.)
I was suspicious because I typically use the persistent argument, and this script lacked it. So I did a little research into the matter. It turns out that the persistence flag is itself persistent: if it is set to yes, it stays yes until something else changes it. As such, a script like the one above can fail if something (anything) happens to set persistence on a system to no. Once that system is rebooted, all of its drive mappings (since they were made while persistence was set to no) will silently vanish, leaving the user wondering what happened (and calling technical support to have it corrected).
That’s easy enough to fix: include the persistence flag explicitly.
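To make the failure mode concrete, here is a minimal sketch of the behavior (the drive letters and bracketed names are placeholders, not anything from our environment):

```batch
:: Suppose something, anything, has flipped the machine-wide default:
net use /persistent:no

:: This mapping silently inherits the current default (no),
:: so it disappears at the next reboot:
net use X: \\[some-server]\[some-share]

:: An explicit flag overrides the default for this one mapping,
:: so it survives reboots no matter what the default says:
net use Y: \\[some-server]\[some-share] /persistent:yes
```

The original login script was entirely of the first form, which is why it worked fine right up until something toggled the default.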
As you might notice from the script above, there are mapped drives being called out for deletion (T: and V:) which are never re-mapped. I wanted to capture that in my replacement script as well. This is what I wrote:
net time \\[our-dc] /set /yes
net use * /delete /yes
net use /persistent:yes
net use M: \\[a-server]\[some-share]
net use P: \\[another-server]\[a-different-share]
net use S: \\[we-have-many-servers]\[and-lots-of-shares]
net use /persistent:no
This version deletes all mapped drives (using the * wildcard). Then it sets persistence to yes and maps three drives. Finally it reverts persistence to no so that any other drives mapped on the system will fall off at reboot (unless they are specifically mapped with persistence set to yes). Because persistence is recorded per mapping at the moment the mapping is made, that final revert does not affect the three drives mapped above it. This covers all the bases and keeps drive mapping very clean. More importantly, it ensures that these mapped drives will always remain persistent regardless of reboots or user location.
My boss didn’t like the idea that the script would delete all mapped drives and he didn’t like the idea of leaving persistence set to no (in case users wanted to map their own drives persistently). So I altered the script again to satisfy those requirements:
net time \\[our-dc] /set /yes
net use M: /delete /yes
net use P: /delete /yes
net use S: /delete /yes
net use /persistent:yes
net use M: \\[a-server]\[some-share]
net use P: \\[another-server]\[a-different-share]
net use S: \\[we-have-many-servers]\[and-lots-of-shares]
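If you want to verify the result on a user’s machine, running net use with no arguments lists the current mappings along with the remembered-connections default. The output below is abbreviated and illustrative, not captured from a real session:

```batch
C:\> net use
New connections will be remembered.

Status       Local     Remote                      Network
-------------------------------------------------------------------------------
OK           M:        \\[a-server]\[some-share]   Microsoft Windows Network
The command completed successfully.
```

The first line is the one to check: if it reads “New connections will not be remembered,” the persistence default has been flipped and un-flagged mappings will be gone after the next reboot.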
Now you should have enough to write some very fine, proper, and best-practice login scripts. Alternatively, you could write some notoriously bad scripts which plague users with bizarre behaviors and flood your help desk with strange calls. Either way, have fun with that.