I wrote a post on this topic a few months ago, titled “Lync 2013 Mobile Client deployment – field notes”.
This has proven to be one of the more popular posts on my blog, both in terms of traffic and comments/questions. I promised to update it to reflect more recent experience and knowledge on the subject, but haven’t had the time until now (vacation time). So finally, here goes.
I’d like to start by pointing out the excellent blog post from MVP Jeff Schertz on the same subject; for anyone wanting to delve into the level of detail he provides, it is well worth the read.
I, on the other hand, will deliver more of a “practical overview”, so to speak: clarifying some of my earlier points and pointing out some changes.
So, to the point: mobility for Lync changed a great deal with the Lync Server 2013 CU1 (Feb 2013) release. The biggest change was UCWA, the “module” that finally allowed for voice and video in the mobile client (and view-only screen sharing as of July, at least for Windows Phone and iPad). With UCWA, how mobile clients connect to the Lync Server also changed. Where the Lync 2010 mobile client could connect to either the internal (port 443) or the external (port 4443) web services of the Front End server, depending on where the client resided, the Lync 2013 mobile client always needs to connect to the external web services. By design, this communication is always established over HTTPS/TCP 443, so the traffic must pass through a reverse proxy or similar service that accepts it on port 443 and forwards it to the server hosting the external web services on port 4443. This was an important point I tried to make as clear as I could in my previous post on the subject. To clarify and visualize, I have “borrowed” the figure from TechNet covering the “Technical requirements for Mobility”, as it is as good as anything I could come up with myself:
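The connection flow above can be sketched in a few lines. This is purely illustrative: the FQDNs are example names, and the path handling is a simplified stand-in for what a real reverse proxy does, but it captures the 443-in, 4443-out translation described in the paragraph.

```python
# Hypothetical sketch of the Lync 2013 mobility connection flow: the client
# always speaks HTTPS on port 443 to the reverse proxy, which forwards the
# request to the Front End external web services on port 4443.
# All host names below are examples, not real deployment values.

def autodiscover_urls(sip_domain: str) -> list[str]:
    """Candidate autodiscover URLs the mobile client tries for a SIP domain."""
    return [
        f"https://lyncdiscoverinternal.{sip_domain}/",  # internal DNS record
        f"https://lyncdiscover.{sip_domain}/",          # external DNS record
    ]

def reverse_proxy_forward(client_url: str, frontend_fqdn: str) -> str:
    """Model the reverse proxy: accept on 443, forward to the pool on 4443."""
    # Keep the request path, swap in the Front End FQDN and port 4443.
    parts = client_url.split("/", 3)
    path = parts[3] if len(parts) > 3 else ""
    return f"https://{frontend_fqdn}:4443/{path}"
```

So a request the client sends to `https://lyncweb.contoso.com/ucwa/v1/applications` would be forwarded by the proxy to `https://fe-pool.contoso.local:4443/ucwa/v1/applications`.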
As the figure shows, even when the client uses the lyncdiscoverinternal.<sipdomain> DNS record for lookup and actually reaches the Lync pool directly from the internal network, the subsequent traffic (registration, signalling and so on) still needs to traverse the perimeter and connect through the reverse proxy, or it will fail. To correct a somewhat erroneous point from my previous post: you do not need to publish the lyncdiscover.<sipdomain> DNS record internally and redirect this traffic externally. Using lyncdiscoverinternal.<sipdomain>, as you did with Lync 2010 mobility, is probably still a best practice. The reason this still works with the new UCWA way of connecting is that the response returned by the lyncdiscover service contains both the internal and the external web services FQDN, and the Lync 2013 mobile client knows which one to opt for.
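To illustrate the selection step, here is a minimal sketch. The response shape below is a simplified, hypothetical stand-in for the real autodiscover payload (which is more deeply nested), but the idea is the same: both web services FQDNs come back, and the client picks one.

```python
# Simplified, assumed response shape: the real lyncdiscover payload nests
# these links differently, but it exposes both the internal and the external
# web services URL. Host names are examples.

SAMPLE_RESPONSE = {
    "internal": "https://lyncpool.contoso.local/Autodiscover/AutodiscoverService.svc",
    "external": "https://lyncweb.contoso.com/Autodiscover/AutodiscoverService.svc",
}

def pick_web_services(response: dict, prefer_external: bool = True) -> str:
    """Choose the external web services URL when available; the Lync 2013
    mobile client ultimately uses the external web services for UCWA traffic."""
    if prefer_external and "external" in response:
        return response["external"]
    return response.get("internal", "")
```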
This leads me to another important point regarding DNS. Since the Lync mobile client will try to resolve, and ultimately connect to, the external web services FQDN, this record has to be part of your internal DNS, at least if your DNS server is authoritative for that namespace. It is also worth mentioning that with “split-brain” DNS (i.e. your internal and external namespaces are the same), your internal and external web services FQDNs must not be the same. How would the Lync client be able to connect to the right one if they were?
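The two DNS rules above can be expressed as a small validation sketch, using example names and a plain dictionary to stand in for the internal zone data:

```python
# Validation sketch of the two DNS rules, with hypothetical example names:
# (1) the external web services FQDN must resolve in internal DNS, and
# (2) with split-brain DNS, the internal and external web services FQDNs
#     must differ so the client can tell them apart.

def validate_dns_plan(internal_zone: dict[str, str],
                      internal_ws_fqdn: str,
                      external_ws_fqdn: str) -> list[str]:
    """Return a list of problems found in the planned DNS records."""
    problems = []
    if external_ws_fqdn not in internal_zone:
        problems.append(f"{external_ws_fqdn} is missing from internal DNS")
    if internal_ws_fqdn.lower() == external_ws_fqdn.lower():
        problems.append("internal and external web services FQDNs must differ")
    return problems
```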
One final point only applies to migration scenarios, but it is an important one nonetheless. I owe this one to one of my colleagues, Tom-Inge Larsen, and his blog post on the topic. It really puzzled me that in a coexistence scenario, the external web services would not work for users homed on either the 2010 or the 2013 pool, depending on which pool the reverse proxy publishing rule was redirecting traffic to. I found it strange that Microsoft would design it like that and force a complete cut-over migration of all users to have it working for everyone. Fortunately there is a solution, and to this day I still cannot find it well documented by Microsoft, at least not in the Migration part of the Lync 2013 TechNet documents. What you need to do is define a separate FQDN for your Lync Server 2013 external web services. Then you create a reverse proxy publishing rule for this FQDN (which also requires a certificate that includes this entry) and update your external DNS with the record. This way, even if your simple URLs still point to the legacy pool during migration, the Lync servers will be able to redirect traffic to one another. And as long as there is an externally available “path” to the new pool, conferences scheduled on the new pool will be available to external participants.
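The migration prerequisite can be summarized as a checklist, sketched here with hypothetical FQDNs. The certificate and DNS data are modeled as plain sets; in reality you would check the actual certificate SANs and zone records.

```python
# Sketch of the coexistence prerequisite described above, with example names:
# the Lync Server 2013 pool gets its own external web services FQDN, distinct
# from the legacy 2010 one, present in external DNS and on the certificate.

def check_migration_plan(legacy_ws: str, new_ws: str,
                         external_dns: set[str],
                         cert_sans: set[str]) -> list[str]:
    """Return a list of problems with the planned 2013 external web services."""
    problems = []
    if legacy_ws.lower() == new_ws.lower():
        problems.append("2013 external web services FQDN must differ from the 2010 one")
    if new_ws not in external_dns:
        problems.append(f"add {new_ws} to external DNS")
    if new_ws not in cert_sans:
        problems.append(f"certificate must include {new_ws} as a SAN")
    return problems
```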
OK, this post turned out longer than first intended, but I like to add some clarity to the hard facts, so to speak. Summing up in a few words what to consider when deploying Lync Server 2013 mobility:
- Point your lyncdiscoverinternal.<sipdomain> record in internal DNS to your Front End pool.
- Have the external web services FQDN in internal DNS (e.g. lyncweb.contoso.com) point to the public IP/interface of your reverse proxy, as the mobile client will need to resolve and connect to this.
- In a split-brain DNS scenario, also make sure that your internal and external web services FQDNs are not the same, as the mobile client will need to know the difference.
- In a migration scenario from Lync Server 2010, make sure that you plan for a separate FQDN for your Lync Server 2013 external web services (not the same as your Lync Server 2010 external web services), including certificate and also a public IP if necessary. Then publish the Lync Server 2013 web services externally just the way you did with the legacy pool.
That concludes my first blog post in some time; I hope it will be helpful to somebody!