The Absolute Greatest White Paper on Intelligent Speaker Ever.

By Michael Tressler

Sr Solutions Consultant, Jabra

BA, Software Engineering, Ball State University

Contents

About the Author

Preamble

Introduction

What is Intelligent Speaker?

What makes a speaker microphone device an Intelligent speaker?

Why is this even a thing?

What are the requirements for Intelligent Speaker?

Hardware requirements

Software Requirements

Network Requirements

Licensing Requirements

Other stuff

How do I set up Intelligent Speaker?

Configure the hardware

Configure Teams Rooms on Windows

Create/Edit Teams Meeting Policies

Creating Teams Rooms Policy

Creating the End User Policy

Assigning Teams Rooms Policy for n00bs

Assigning Teams Rooms Policy for 1337 h4x0r

Assigning End User Policy for n00bs

Assigning End User Policy for 1337 h4x0r

Digital voice profile

Biometric Privacy with Intelligent Speaker

Set up your digital voice profile

How to use Intelligent Speaker in a meeting

Editing mistakes

Troubleshooting

Summary

About the Author

Michael Tressler is a Senior Solutions Consultant at Jabra. He focuses on enabling video sales in the channel via education, training, and awareness with our partners. He is closing in on his first year with Jabra.

Prior to Jabra, Michael worked for 6 years at Microsoft, with three of those years exclusively focused on Microsoft Teams devices such as Teams Rooms on Windows and Teams Rooms on Android. 

Michael has trained thousands of partners and customers on Teams Rooms on Windows and Teams Android devices.

Michael is a moderately proud graduate of Ball State University – best known for graduating David Letterman (so the standards at that school are…let’s go with inconsistent).

You can follow Michael on Mastodon via @flinchbot@twit.social.

Preamble

In November 2023, at Microsoft Ignite, Microsoft announced that all Teams Rooms certified microphones will be able to do speaker attribution.[1] However, this does not mean the end of value for Intelligent Speaker. In a blog post, Microsoft says it expects Intelligent Speakers to outperform “non-intelligent” devices:

“While we’re delighted to extend the capability of speaker recognition to more rooms, it’s important to note that the quality may not match that of an intelligent speaker device. So, it’s essential to evaluate the advantages of incorporating an intelligent speaker, particularly in crucial spaces where attaining the highest quality transcription and attribution is vital.” Get more out of hybrid meetings with Teams Rooms and Copilot – Microsoft Community Hub

Also, when used in BYOD (Bring Your Own Device) mode, the PanaCast 50’s Intelligent Speaker certification is still important. In BYOD mode, if the PanaCast 50 is directly connected to a Windows PC or laptop, the PanaCast 50 will provide speaker attribution.

From the Microsoft announcement[2]:

“Add intelligence to BYOD meeting rooms
Intelligent speakers, previously only available in Teams Rooms, will now also be supported in BYOD meeting rooms. Devices like the Jabra PanaCast 50, or other Certified for Teams intelligent speakers, ensure in-room participants maintain their identity in the meeting transcript, enhancing AI-based productivity tools such as intelligent meeting recap and Copilot in Teams. This will be available by the end of 2023.” 

What does this actually mean? Two things.

1. The Teams desktop client can be used to capture name attribution. Prior to this, only the Teams Rooms app could do this.

2. Educated guess: Requiring an Intelligent Speaker when connected to a laptop or desktop is an attempt to keep end users from plugging in terrible microphones (think laptop microphones) and expecting a reliable, accurately attributed transcript. By requiring one of the certified Intelligent Speakers, Microsoft minimizes the problem of people using trash microphones with transcription and expecting speaker attribution to work.

 

Note: Everything said in this paper about the Jabra PanaCast 50 also applies to the Lenovo ThinkSmart Bar 180, as they are essentially the same device with different branding.

Introduction

Microsoft announced Intelligent Speaker at Ignite in March of 2021[3] and it went into preview in the second half of that year. At initial release, EPOS and Yealink[4] were the only two manufacturers to produce Intelligent Speaker certified devices.

In the blog announcement from Microsoft, Intelligent Speaker was defined as such: “…allow attendees to use the transcription to follow along or capture actions, by knowing who in the room said what. Whether you are working remotely or following the meeting in the conference room, you can effectively see who said what during the meeting.”[5]

Cool. What does any of that mean?

Why does this thing exist?

How do you set it up?

Any security issues with this?

Wait – I thought this was about identifying the person talking, so shouldn’t it be called “Intelligent Microphone”?

These questions, and many more, will be answered in the following beautifully worded paragraphs.

What is Intelligent Speaker?

Intelligent Speaker, at its core, is proprietary Microsoft technology that uniquely identifies a person’s voice so that speaker attribution is accurate in spaces such as meeting rooms. Put another way: when multiple people are speaking in a conference room and the transcription feature in Microsoft Teams is enabled, how can the sentences and words of each in-room attendee be attributed to them rather than generically attributed to the meeting space?

Here is a sample transcript of a user speaking in a conference room, appropriately named “Epic Conference Room of Awesome”. Note the sentences from this snippet of the transcript are not attributed to a human, but rather to the conference room.

Figure 1: Donuts are delicious

First off, how many people are in this conversation? 1 person talking to themselves? Two people? Three?

Let’s say there are two people in this conversation. Who said what? Am I bringing the donuts[6] or is the other person bringing them? Who even is the other person? 

Wouldn’t it be swell if the transcript showed the name of the person who said each sentence instead of just the room name?

What makes a speaker microphone device an Intelligent speaker?

There are some hints of what makes Microsoft certified Intelligent Speakers so fancy. But not much. It is mentioned that Intelligent Speakers include a 7-microphone array[7] to help identify the voices of up to ten people in a meeting room. Little more is given regarding hardware requirements. The Jabra PanaCast 50 (P50) has 8 beamforming microphones, so I guess that’s good enough!

Beyond the hardware, there are the Microsoft services on the backend that really provide the magic powers. Microsoft says it is leveraging Microsoft Graph, which “…provide[s] access to rich people-centric data and insight in the Microsoft Cloud to contextualize the transcription. For example, because we know who the speaker is, the acronyms, names of colleagues, and different words the speaker uses can be more accurately transcribed.”[8]

Word.

Not mentioned is all the other magic that needs to happen. For example, it must be able to match up a given voice to you. Or me. There is an audio-matching algorithm that must do this. Then there is a speech-to-text service to convert your spoken words to text so that they can be accurately written to the transcript. And then there must be a way for us to manually fix mistakes (if we care enough).

As of October 2023[9], there are now six hardware devices that Microsoft has certified to use the Intelligent Speaker feature.

  1. EPOS Capture 5
  2. Yealink MSpeech[10]
  3. Sennheiser TeamConnect Intelligent Speaker
  4. Jabra PanaCast 50 (YEAH BABY!)
  5. Yealink SmartVision 60
  6. Lenovo ThinkSmart Bar 180 [11]

Why is this even a thing?

Why does Intelligent Speaker even exist? Heck – have you ever been in a meeting with transcription enabled?

Me either.

So why? I’ll tell you why: Because it’s cool technology!

OK, that’s not why. The why is boring and I’m trying to pep up this section. And I think I’m failing. And now I’m just wasting your time. So here we go:

Regulated Persons.

There you go. Good times.

What’s a regulated person?

According to some random website called Law Insider, “Regulated Persons means certain broker-dealers and registered investment advisers that are subject to prohibitions against participating in pay-to-play practices and are subject to the SEC’s oversight and, in the case of broker-dealers, the oversight of a registered national securities association, such as FINRA.”[12]

Put another way, these people have all their communications logged, tracked, and recorded so, should a legal issue arise, they can claim their innocence. Hopefully. Otherwise: jail time.

So how do you track everything someone says when they walk into a common space like a conference room? Hello, Intelligent Speaker.

This has been the primary use case for Intelligent Speaker since its launch. And as such, this has been a niche feature that most Teams Rooms admins either have never heard of or have ignored because there is no need for it.

BUT THAT IS ALL ABOUT TO CHANGE!

Let me introduce you to my little friend – Copilot! Copilot! Copilot!

(Are you surprised it took me this long to get into the hip AI topic of the day?)

What happens if you throw AI, erm, Copilot at a transcript? It can quickly summarize it, pull out notes, and even put together a list of tasks derived from the meeting. And a transcript of a meeting room with proper attribution for Copilot to ingest? That’s like the greatest thing ever.

Now Copilot can summarize tasks like “Michael agreed to buy the donuts” instead of “Conference Room A agreed to buy the donuts”[13].

And now this Intelligent Speaker feature becomes far more than a niche feature for a handful of regulated persons. It becomes a potential game changer for office workers around the globe.

What are the requirements for Intelligent Speaker?

We now know that Intelligent Speaker will save the planet. Or something like that. How does one set it up? What are the requirements? There are quite a few and I’ll start by setting up the administrative side and then show you how to set up the end-user side.

Hardware requirements

I’ve already mentioned the six supported Intelligent Speakers above. But here they are again with pictures. I’m doing this to drive home just how different the Jabra PanaCast 50 (and the Lenovo ThinkSmart Bar 180) is from the other Intelligent Speakers on the market.[14]

EPOS Capture 5


Jabra PanaCast 50


Lenovo ThinkSmart Bar 180

Sennheiser TeamConnect Intelligent Speaker

Yealink MSpeech


Yealink MVC S60 (Maybe)

Two are all-in-one video bars with industry-leading video and audio. Three of them are speaker pucks, and the other is also a center-of-table device. Can you spot which one is the best option?[15]

The second hardware requirement is that – as of this writing in October 2023 – Intelligent Speaker is only available on Microsoft Teams Rooms on Windows.[16] This won’t work on Zoom Rooms (Windows or Android). And it won’t work on Teams Rooms on Android. The Android-based Jabra PanaCast 50 Video Bar System cannot do attributed captions. This may change at some point, but until then, if someone is interested in deploying Intelligent Speaker, Microsoft Teams Rooms on Windows is the only option. However, as mentioned in the preamble, this will change in 2024 to permit “Unintelligent Speakers” to do speaker attribution.

And that’s about it for hardware. You need a Microsoft-certified Intelligent Speaker and Teams Rooms on Windows. (For now).

Note: If someone is still using a Logitech SmartDock running Teams Rooms, sell them a more modern Teams Rooms implementation. Remind them that Intelligent Speaker is not supported on those ancient things due to “… a known issue that Teams Rooms can’t recognize the Intelligent Speaker through the dock.”

Software Requirements

The software requirements for Intelligent Speaker are straightforward. You will need a Microsoft Teams Rooms on Windows installation connected to the Intelligent Speaker of your choosing – which, in this paper, is the PanaCast 50. You also need to set the P50 as the default speaker and microphone within Teams Rooms. For Intelligent Speaker to work, it must be the default speaker and microphone.[17]

Network Requirements

The network requirements for Intelligent Speaker are the same as for any Teams Rooms on Windows installation, with one exception: when using speaker attribution, you need 7 Mbps of available upload bandwidth.[18] On the nerd side, each of the seven microphones sends its own stream to Microsoft, adding up to 1 Mbps per audio stream, for a maximum of 7 Mbps. Once the audio streams reach Microsoft, magic happens, and voice matching is attempted in the cloud.[19]

When Microsoft adds support for Unintelligent Speakers, the required bandwidth should drop to 1 Mbps.

Licensing Requirements

The Teams Rooms resource account needs a Teams Rooms Pro license assigned to it. Speaker attribution is not supported on the Teams Rooms Basic license.[20]

Note: If a customer is still using the legacy Teams Rooms Standard license, Intelligent Speaker features will work with it as well as with Teams Rooms Pro.

Other stuff

This feature is available in all countries and regions, at least as Microsoft defines them. That does not mean that all languages and locales are supported. See this list of supported locales.[21]

Beyond being available only in certain locales, there are legal ramifications to using Intelligent Speaker. For Intelligent Speaker to work, users have to give up some biometric information (i.e., their voice print). Some nations, principalities, city-states, and other political entities may have an issue with this. Check first whether it is legal where you intend to set it up.

Second, if it is legal, verify with your company’s legal team whether it is allowed (or desired at all) in your organization. Some companies like plausible deniability, and not having a transcript sure helps avoid some of that pesky legal paperwork that needs to be handed over in a lawsuit. Or they just really value their employees’ privacy.

Assuming the points above are cleared, the Microsoft Teams administrator then needs to create meeting policies that explicitly enable the voice attribution feature. Depending on the approach, this will either apply to all users of that Microsoft 365 tenant (by editing the Global meeting policy), or the administrator can be more tactical and create a custom policy assigned only to users willing to give up their voice print for the common good.

Which leads to….you must hope your users record their voice prints. It is completely voluntary for them to do this. It’s generally bad form to force an employee to give over personal biometric data like their voice[22].

How do I set up Intelligent Speaker?

This gets a little tricky but if it were easy, I wouldn’t be writing this.

Configure the hardware

The first thing is to make sure your Intelligent Speaker is on the latest firmware. Make sure your Jabra PanaCast 50/Lenovo ThinkSmart Bar 180 is on firmware version 6.22 or later. You also need to have Jabra Direct[23] version 6.11.28601 or later.

Connect your PanaCast 50 to a computer and start Jabra Direct. Once the P50 is recognized by Jabra Direct, click on it to get to the settings.


On the screen that appears, click on Settings to get to the good bits.


From within Device settings, scroll down until you see the Playback device type setting. Hit the drop down and change it from “Communication device” to “Microsoft Teams Rooms device”.


Click Save at the top and then reboot the P50. The PanaCast 50 is now ready to be an Intelligent Speaker.

Configure Teams Rooms on Windows

After your P50 reboots, you’re not quite done. You now need to verify that the setting was successfully applied and that the P50 is set as the correct output device within Teams Rooms.

Go to the Teams Rooms on Windows console and tap More.


On the next screen tap Settings.


You are then prompted to sign into Teams Rooms with administrative credentials. Enter the administrative password to move on to the next step.

From within Settings, scroll down to the Peripherals section.


Finally, set the Audio settings for Teams Rooms. For Microphone for Conferencing, select the PanaCast 50 that has UAC2_TEAMS in the name, as shown in the image below. (For a ThinkSmart Bar 180, the name will be different, but the (UAC2_TEAMS) will be the same).


Set the Speaker for Conferencing to the PanaCast 50 that has UAC2_Render in the name. This is shown in the below image. (For a ThinkSmart Bar 180, the name will be different, but the (UAC2_Render) will be the same).


Set the default speaker to the same thing you set above – the (UAC2_Render) device.

At this point you’ve completed the easy part from the admin side. Now we need to create some meeting policies.

Create/Edit Teams Meeting Policies

Up until now, this has been straightforward and anyone with a laptop, a cable, and a PanaCast 50 handy can do this work. At this point, things change. In most organizations, you now need to bring in your Microsoft 365 administrators as you need to edit or create new policies to apply some custom settings.

There are two ways to do this:

  1. Edit the Global Teams meeting policy.
    1. The advantage here is it’s global, so all user accounts will get this setting.
  2. Create/Edit a custom policy and only apply it to certain users.
    1. Generally, you should not edit Global policies and instead create custom policies. This isn’t the document to debate the pros and cons of policy creation and hierarchy. But in this paper, I’m going with this approach in that I will create a new Teams meeting policy.

The person creating or editing these policies needs any of the following roles assigned to them:

  • Teams Administrator[24]
  • Teams Communications Administrator[25]

You need to edit/create two new policies – one for the Microsoft Teams Rooms Resource Account[26] and one for end users.

Note: You could create just one policy covering both settings, but I’m going to show the most granular way to do this. How customers choose to implement these policies is wholly up to them.

The first policy is to enable the speaker attribute feature on Microsoft Teams Rooms. Note that you don’t set policies on the Teams Rooms device, you set policies on the Resource Account that signs into Teams Rooms and runs the meetings on the device.

First, I will create a policy called IntelligentSpeakerMTR that sets the value “roomAttributeUserOverride” to “Attribute”.[27]

There are three values you can set for “roomAttributeUserOverride”.

One is “False”, which turns the feature off; another is “Attribute”, which enables speaker attribution; and the third is “Distinguish”, which tells the speaker to distinguish between different voices but *not* provide name attribution in the transcript (e.g., “Speaker 1”, “Speaker 2” instead of “Alice”, “Bob”).
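For example, if you wanted voices separated in the transcript but left anonymous, a policy like the sketch below would do it. It reuses the same cmdlet and parameter shown later in this paper; the policy name here is just an illustration.

# Hypothetical policy: separate voices without attaching names to them
New-CsTeamsMeetingPolicy -Identity AnonymousAttributionMTR -roomAttributeUserOverride Distinguish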

The second policy is assigned to the users who will be allowed to have their voices transcribed (aka folks who aren’t bonkers over the privacy of their biometrics). This policy will be called IntelligentSpeakerUser and I will set the value for “enrollUserOverride” to “Enabled” and the value for “AllowTranscription” to “True”.

What do these attributes set? Good question. Also – good to know you’re awake and have read this far. You, my friend, are an amazing human being.

“enrollUserOverride” is used “…to set voice profile capture, or enrollment, in Teams settings for a tenant.”[28] That’s a bit much as this isn’t a tenant level setting, but a user level setting. But whatever. It’s in Microsoft official documentation so it must be true.

If this attribute is disabled, the following happens (or doesn’t, depending):

  • Users who have never enrolled can’t view, enroll, or re-enroll.
  • The entry point to the enrollment flow will be hidden.
  • If users select a link to the enrollment page, they’ll see a message that says this feature isn’t enabled for their organization.
  • Users who have enrolled can view and remove their voice profile in the Teams settings. Once they remove their voice profile, they won’t be able to view, access, or complete the enrollment flow.[29]

We want to enable this. When enabled, you get all this awesomeness:

  • Users can view, access, and complete the enrollment flow.
  • The entry point will show on Teams settings page under the Recognition tab.[30]

The other attribute we will set is “AllowTranscription”, which is obvious. You either allow transcription or you don’t. I want to allow transcription, so I will set this to True.

Creating Teams Rooms Policy

Let me show you how to create these policies using Microsoft Teams PowerShell. You cannot do this using Teams admin center, which is the tool for total n00bs. YOU are not a total n00b are you? You are a 1337 h4x0r! We 1337 h4x0r5 use PowerShell!

At this point, if you are indeed 1337 h4x0r, do your thing. You don’t need documentation!

For those aspiring 1337 h4x0r5, I’ll walk you through this.

First, start PowerShell on your PC as Administrator (Pro Tip: use the Terminal app). If you don’t know how to start PowerShell, you can stop now and pass this documentation off to a more experienced administrator.

Once PowerShell has started, you need to make sure you have the Microsoft Teams PowerShell module installed. If you are unsure, run the following cmdlets[31] with these parameters:

Install-Module -Name PowerShellGet -Force -AllowClobber

Install-Module -Name MicrosoftTeams -Force -AllowClobber

Import-Module -Name MicrosoftTeams
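If you just want to check whether the module is already installed (and which version you have) before reinstalling anything, this one-liner will tell you:

# Lists the installed MicrosoftTeams module and its version, if present
Get-Module -ListAvailable -Name MicrosoftTeams | Select-Object Name, Version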


Now that you have the correct PowerShell module installed, you need to connect your PC to Microsoft Teams. You do this by running

Set-ExecutionPolicy -ExecutionPolicy Unrestricted

Import-Module -Name MicrosoftTeams

Connect-MicrosoftTeams

After entering the Connect-MicrosoftTeams cmdlet, you will be prompted to sign in.


When successfully signed in, you get this wonderful feedback (your values will be different).

Now we can get to business.

The cmdlet needed to create a new Teams meeting policy is New-CsTeamsMeetingPolicy. This is the full command line I will enter:

New-CsTeamsMeetingPolicy -Identity IntelligentSpeakerMTR -roomAttributeUserOverride Attribute

Copy and paste that into your PowerShell session. After a few seconds you should get a raft of information back. You can scroll up and see if the change has taken effect. If you aren’t into playing a PowerShell version of “Where’s Waldo”, you can run this PowerShell command to see what the value is set to for “roomAttributeUserOverride”.

Get-CsTeamsMeetingPolicy -Identity IntelligentSpeakerMTR | Select-Object roomAttributeUserOverride

If the value returned is “Attribute” then you are ready for the next step.

Creating the End User Policy

The second policy you need to create is the one you will assign to end users. Only end users with this policy assigned will be able to enroll their voice for speaker attribution. As above, open a PowerShell session and connect to your tenant in the cloud.

Below is the PowerShell command needed to create the policy.

New-CsTeamsMeetingPolicy -Identity IntelligentSpeakerUser -enrollUserOverride Enabled -AllowTranscription $true

Copy and paste that into your PowerShell session.

After you hit enter, a raft of information should go flying by. You can scroll up to validate the changes in this policy or run the following PowerShell to confirm the attribute you set.

Get-CsTeamsMeetingPolicy -Identity IntelligentSpeakerUser | Select-Object enrollUserOverride, AllowTranscription

If you see “Enabled” and “True” then you are good to go.

Assigning Teams Rooms Policy for n00bs

Now that you have the Teams Rooms policy created, you need to assign it to a Teams Rooms Resource account. The perceived easiest way for a new administrator to do this is via Microsoft Teams admin center (TAC). This way you can click away with no mucking about with PowerShell.

To access TAC, open a web browser and enter admin.teams.microsoft.com into the address bar. If necessary, sign into your Microsoft 365 tenant.

From here, navigate to the Users section and click on Manage users.


From here, you can either scroll down and find an account, or type in the account name in the Search for a user search box. In this case, I will edit the second account listed – “Conference Room – MTR1”. I click on the name in the Display name column to bring up the properties for that account.

Once I have the properties for that account, I click on Policies to see which policies are assigned to that account.


To change a policy, click on the Edit icon. This brings up the list of possible policies and their settings. Scroll down until you see Select Meeting policy.


After clicking on the drop-down list for meeting policies, you see all available options. Select IntelligentSpeakerMTR and click Apply at the bottom of the screen.


You have now assigned this policy to the Teams Rooms Resource Account.

Note: After a policy is assigned, it can take up to 48 hours to take effect. To get the policy to take effect sooner, accounts must be signed out and signed back in.

Assigning Teams Rooms Policy for 1337 h4x0r

Grant-CsTeamsMeetingPolicy -Identity mtr.mtressler.1@jabrademos.com -PolicyName IntelligentSpeakerMTR
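If you want to confirm the assignment stuck, something like the following should show which meeting policy is on the resource account. I’m assuming the TeamsMeetingPolicy property name on the Get-CsOnlineUser output here; a blank value means the Global policy still applies.

# Sanity check – TeamsMeetingPolicy property name assumed; blank = Global policy
Get-CsOnlineUser -Identity mtr.mtressler.1@jabrademos.com | Select-Object DisplayName, TeamsMeetingPolicy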

Assigning End User Policy for n00bs

I’ll make this quick.

The steps are the exact same as above – go to Teams admin center, find an end user, and change their policy to IntelligentSpeakerUser. Click apply and wait for the change to take effect.

The one difference is you probably want to apply this to several users at once and not assign the policy one at a time. To do this, click to the left of the names to which you want to assign this policy. A checkmark appears next to the selected names.


Once you have the names selected, scroll back to the top and click Edit settings.


From here, scroll down to Meeting policy and select IntelligentSpeakerUser, then click Apply at the bottom to apply the policy to the group of users.


Assigning End User Policy for 1337 h4x0r

Grant-CsTeamsMeetingPolicy -Identity avance@jabrademos.com -PolicyName IntelligentSpeakerUser

Alternately, something like this:

Get-CsOnlineUser | Grant-CsTeamsMeetingPolicy -PolicyName IntelligentSpeakerUser
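Keep in mind that the pipeline above grants the policy to every user returned by Get-CsOnlineUser. If you only want a subset, filter first. The sketch below assumes your users have a Department value worth filtering on; adjust to taste.

# Sketch: assign the policy to one department instead of the whole tenant
Get-CsOnlineUser | Where-Object { $_.Department -eq 'Finance' } | Grant-CsTeamsMeetingPolicy -PolicyName IntelligentSpeakerUser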

Digital voice profile

At this point, all the work is done on the administrative side. Now it is up to end users to record their voice profiles. You (legally, at least in most countries) can’t force people to do this. Sadly. 😊 However, once people see the benefit, they may volunteer, but I would be surprised if you ever get 100% end-user buy-in. Some people are too lazy to do it and some people value their biometric privacy too much.

Biometric Privacy with Intelligent Speaker

What is the privacy story? Where is my recorded voice stored? Can anyone access it? Does it work cross-tenant? Those are all good questions and maybe I’ll answer two or three of those.

Your “…voice profile data is stored in Office 365 cloud with user content.”[32] What does this mean? I’m not exactly sure. Microsoft isn’t spilling any technical details. Here is another statement covering this topic:

“Voice data will be securely stored in the Office 365 Cloud, and users will retain control of their information, including the ability to delete it at any time. The capture of voice data can be turned on or off for each meeting. Additionally, admins have full control to turn on/off people identification through voice recognition feature across the organization.”[33]

Cool.

This is all I can add to this topic since Microsoft doesn’t have a strong public statement I can find: Biometric data stays within your tenant and is not publicly exposed. That’s all I got.

Admins can export the audio data[34] via Teams admin center. If you go to a user who has made a voice recording, you will see an option to download the biometric profile.[35]

Audio data can only be used within your tenant. This means if you or someone in your tenant hosts the meeting, then Intelligent Speaker features will work (if enabled). If you walk into a meeting room at another Office 365 tenant, your voice profile data will not be used and you will show up as “Speaker X” in the transcription.

To clarify, Alice from Contoso has set up her voice profile. She is going to an in-person meeting with Bob at Northwind Traders. Both Bob and Northwind Traders have successfully set up Intelligent Speaker. In the meeting with Bob and Alice in the Northwind Traders conference room, Bob will be properly attributed in the meeting room, while Alice will appear as “Speaker 1” – even though she has set up her biometric data. This is because the biometric data from Contoso is not shared with Northwind Traders.

Another point: the voice print biometric data is only used within Teams voice-recognition scenarios and not by any other Microsoft software or service.[36]

In a pro-privacy move, a user’s voice print data is removed after a year of non-use – specifically, if the user “…isn’t invited to any meetings with an Intelligent Speaker within that 1-year period.”[37] If a user leaves the company and their account is deleted, the data is removed within 30 days, or per whatever data retention policy is in effect.[38]

Set up your digital voice profile

To set up your voice profile, open the Teams app, click the three dots (…) in the upper right, and click on Settings.


Once settings opens, scroll down to Recognition. If you see the message that says you are not enabled, then :sad face:. Most likely, you have not been assigned the Intelligent Speaker policy defined above, or – more likely – the policy has not yet taken effect on your account. Come back later.


If the policy has been assigned and applied to the user, you get the following screen instead.


Click on Create voice profile to get started.


At the top of the screen (where it says “Microphone array…”) you can select which microphone you are using for the recording. Make sure you pick the right microphone and that you are in a quiet room. Click Start voice capture and read that paragraph. It doesn’t matter if you mess up and need to read part of it again. You are not recording something for posterity. You are just letting Teams learn what your unique voice print is.

Nerd Note: You can read *anything* you want. That paragraph on the screen is just something that’s long enough to get a decent voice print. So as mentioned above – it doesn’t matter if you mess up. The point is that you speak for 15 seconds or so.

Note: You cannot create or update your voice print while you are in a meeting. If you try, you will receive this awkwardly worded notice:

How to use Intelligent Speaker in a meeting

Now that you have your voice print made, you just roll into a conference room and start talking and magic happens, right?

Oh, if it were so easy. You precious child. With your simplistic desires.

There is more than just setting up policies and recording your voice. You need to set up the meeting invite correctly.

Here are the requirements that must be met for attributed transcription to work successfully in a meeting with an Intelligent Speaker:

    1. Everyone who intends to have their voice transcribed must be listed on the meeting invite.
    2. No more than 20 people can be on the meeting invite. (Well, 19 if you include yourself, the organizer)
      • “Intelligent Speakers work best in medium-sized rooms that hold 8–10 people.”[39]
      • If more than 20 people are on the invite, Intelligent Speaker is disabled.[40]
    3. Transcription needs to be supported for the meeting. (We did that in the user voice policy, but it’s the meeting organizer’s policy – not yours – that determines if transcription is allowed.)
    4. Someone needs to turn on transcription in their Teams client. Once the meeting starts, you can only enable transcription from the Teams client and not directly on the Teams Rooms console.
      • To enable transcription, click the three dots (More) from within the meeting. Then click Record and transcribe > and finally click Start transcription


At this point, you should see a transcription with your name instead of something generic as seen at the very beginning of this white paper (Figure 1: Donuts are delicious).

Below is a stolen image that was very likely a copyright violation until I probably did the greatest Photoshop edit ever to totally make it a unique work. Like Andy Warhol, this is my art.

If you look at the transcription on the right, you’ll see that it says Serena Ribeiro (Conf Room P…). This lets us know that Intelligent Speaker is working as it recognized Serena’s voice and that she is in a conference room.


What if someone in the conference room speaks and they don’t have a voice print (either they never set it up, weren’t on the invite, or are from a different tenant)? What happens then? Anything?

Intelligent Speaker tracks all the voices in the room (well, the first ten). If it hears a unique voice with no accessible voice print, it will tag it as Speaker 1 in the transcript. If a second such voice speaks, they will be Speaker 2, etc.

See the following stolen screen shot for an example of a person being tagged as Speaker 1.

Select Identify speaker[41]

Editing mistakes

What if this whole thing makes a mistake? Or we just want to manually attribute a user in the transcript.

In the image above, you see there is a button named Identify speaker. If you click that, a drop down appears showing the names of everyone that was on the meeting invite. Pick the right name and that attribution is fixed.

Note: You can only change to a person that was on the meeting invite. This is to prevent falsely attributing something to someone who wasn’t in the room. Otherwise, nothing would stop me from attributing something to Luke Skywalker that Darth Vader said.

For more information on editing attribution on a transcript, see this document. I’m not in the mood to basically copy/paste that article into this one.

Note: You can hide your identity in meeting captions and transcripts! See this link for more info.

Troubleshooting

I’m not going to write a guide because:

  1. Review the steps above and make sure you got it right. (A quick PowerShell sanity check of the two policies is shown after this list.)
  2. Bugs pop up and Microsoft has a page dedicated to known issues. So please go there. (Though at the time of this writing they are still referencing an old Teams Rooms license so…..)
  3. One tip: If you see “Speaker 1” in the meeting transcript instead of the person’s name, this is a sign that this has been set up correctly, but it is not recognizing the person speaking. Make sure the user policies have been assigned to the user – which could take a minute. Or two days. Also, have the user re-record their voice in the Teams client. I have seen this fix a problem with a person not being recognized.
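Here is the quick PowerShell sanity check mentioned in point 1. Both commands should return the values set earlier in this paper; anything else means the policies need another look.

# Confirm the two policies still carry the expected values
Get-CsTeamsMeetingPolicy -Identity IntelligentSpeakerMTR | Select-Object Identity, roomAttributeUserOverride
Get-CsTeamsMeetingPolicy -Identity IntelligentSpeakerUser | Select-Object Identity, enrollUserOverride, AllowTranscription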

Summary

I hope this document helped you understand what Intelligent Speaker is and how to set it up with a Jabra PanaCast 50. It’s a cool feature but there is one warning:

Don’t expect perfection.

I’ve historically been disappointed in the accuracy of the attributed names in the transcription.  However, due to the Jabra PanaCast 50 going through Microsoft’s Technology Adoption Program (TAP), Microsoft themselves looked closely at this feature for the first time in a while. As such, the service has gotten quite a few improvements on the back end.

Oh hey – I never answered why this solution is called Intelligent Speaker. This whole feature is based around microphones that capture voices so shouldn’t it be called Intelligent Microphone? Yes, yes it should. Also, the name Intelligent Speaker presumes that the person talking is saying something intelligent. That is not always the case.

It’s called Intelligent Speaker just because. It’s what Microsoft initially called it in development – most likely because it was initially based on a speaker puck design so that name kind of stuck.

  1. Microsoft Teams Rooms and Devices: Microsoft Ignite 2023 – Microsoft Community Hub

  2. Microsoft Teams Rooms and Devices: Microsoft Ignite 2023 – Microsoft Community Hub

  3. Flexible work is here to stay: Microsoft 365 solutions for the hybrid work world | Microsoft 365 Blog

  4. The Yealink MSpeech can only connect to a Yealink Microsoft Teams Rooms system. The EPOS can connect to any vendor’s Teams Rooms.

  5. Flexible work is here to stay: Microsoft 365 solutions for the hybrid work world | Microsoft 365 Blog

  6. I will always bring the donuts.

  7. Announcing general availability for Intelligent speakers for Microsoft Teams Rooms – Microsoft Community Hub

  8. Announcing general availability for Intelligent speakers for Microsoft Teams Rooms – Microsoft Community Hub

  9. The Yealink MVC S60 should be certified soon.

  10. Yealink MSpeech can only be used with Yealink Teams Rooms installations

  11. The Lenovo ThinkSmart Bar 180 is manufactured by Jabra. As such, this document completely applies to the Lenovo ThinkSmart Bar 180 as well.

  12. Regulated Persons Definition | Law Insider

  13. Just to clear up any future confusion: I will always buy the donuts.

  14. Plus, it makes this whitepaper longer which adds to its legitimacy.

  15. It’s the second one. The Jabra one. That’s the best one! If you picked that one, go get yourself a well-deserved donut.

  16. As mentioned in the preamble, this will change at some point in 2024 where you can also use a BYOD connected laptop or desktop.

  17. I don’t have a reference for this. But I worked at Microsoft. You gonna question me on this one???

  18. Tenant Administration control for voice recognition (voice profile) in Teams Rooms – Microsoft Teams | Microsoft Learn

  19. Source: Conversation between Greg Baribault, Microsoft, and the author at Teams Rooms World, 25 October, 2023.

  20. Microsoft Teams Rooms licenses – Microsoft Teams | Microsoft Learn

  21. What is a locale? Locale – Globalization | Microsoft Learn

  22. Wait until we get into facial recognition! Speaking of which – Intelligent Speaker is *not* a technical requirement for facial recognition to work. Microsoft wants to recognize people in the room even when they are not an active speaker. As of today, the Teams client requires voice before facial biometrics can be recorded, but I get the impression that will change in time. Source: Conversation between Greg Baribault, Microsoft, and the author at Teams Rooms World, 25 October, 2023.

  23. You can download Jabra Direct from here – Jabra Direct – Engineered to optimize and personalize your headset

  24. Use Microsoft Teams administrator roles to manage Teams – Microsoft Teams | Microsoft Learn

  25. Use Microsoft Teams administrator roles to manage Teams – Microsoft Teams | Microsoft Learn

  26. Create resource accounts for rooms and shared Teams devices – Microsoft Teams | Microsoft Learn

  27. Set-CsTeamsMeetingPolicy (SkypeForBusiness) | Microsoft Learn

  28. Tenant Administration control for voice recognition (voice profile) in Teams Rooms – Microsoft Teams | Microsoft Learn

  29. Tenant Administration control for voice recognition (voice profile) in Teams Rooms – Microsoft Teams | Microsoft Learn

  30. Tenant Administration control for voice recognition (voice profile) in Teams Rooms – Microsoft Teams | Microsoft Learn

  31. What is a “cmdlet”? Cmdlet Overview – PowerShell | Microsoft Learn

  32. Tenant Administration control for voice recognition (voice profile) in Teams Rooms – Microsoft Teams | Microsoft Learn

  33. Announcing general availability for Intelligent speakers for Microsoft Teams Rooms – Microsoft Community Hub

  34. Tenant Administration control for voice recognition (voice profile) in Teams Rooms – Microsoft Teams | Microsoft Learn

  35. I have no idea why this is possible, as biometric data should be secured and you should only be allowed to delete it. But that’s just my opinion.

  36. Tenant Administration control for voice recognition (voice profile) in Teams Rooms – Microsoft Teams | Microsoft Learn

  37. Tenant Administration control for voice recognition (voice profile) in Teams Rooms – Microsoft Teams | Microsoft Learn

  38. Data retention, deletion, and destruction in Microsoft 365 – Microsoft Service Assurance | Microsoft Learn

  39. Use Microsoft Teams Intelligent Speakers to identify in-room participants in a meeting transcription – Microsoft Support

  40. Use Microsoft Teams Intelligent Speakers to identify in-room participants in a meeting transcription – Microsoft Support

  41. Use Microsoft Teams Intelligent Speakers to identify in-room participants in a meeting transcription – Microsoft Support

Enabling and Validating QoS on Teams Rooms on Android

While delivering some Teams Rooms on Android training, I was asked a question about Quality of Service (QoS) on Microsoft Teams Android devices. I knew the answer conceptually, but not practically. In other words, I wanted to see it in action. 

In this article, I will show how to set up QoS on Teams Android devices and then how to validate that DSCP tags are being applied.

There are a few ways to implement QoS for Teams – via Group Policy, via networking equipment, and via Meeting settings in Teams.

This article isn’t going to go through the pros and cons of each method. I’m just going with the last option – enabling QoS via Meeting Settings in Teams.

To do this, open Teams admin center and expand the Meetings section. Next, click on Meeting settings.

Meeting settings in Teams admin center

From here, scroll to the Network section. Right at the top of this section is an option named Insert Quality of Service (QoS) markers for real-time media traffic. Flip this switch from Off to On.

Enabling QoS in Teams

Note: This will enable QoS on a lot of things beyond Teams Android devices – like all your Android mobile phones. There *shouldn’t* be a problem here, but be sure you are working on this with your networking staff because there is a lot more to QoS than flipping this switch. I’m also leaving all the media ports at their defaults, whereas your network team may need you to change these values.

Once you have enabled QoS, wait. At some point, like magic, Teams network traffic will start getting tagged with the appropriate DSCP markers.

So how do we know it’s working?

Well, in my world, you look at a Teams network packet and see if it is tagged. It is at this point in the article where I can only provide some high-level guidance, as every network is different and where you go to “listen” to network traffic differs wildly. If you are in any moderately sized IT department, there will be someone who can do this for you or work with you to sniff the network traffic.

Yup, if this is all new to you, the word sniffing is part of the parlance.

In my simple home network, this is how I did it. 

I happen to have a firewall from Ubiquiti that has a packet sniffer built in. It’s called tcpdump, which is a commonly used tool to sniff traffic. I won’t explain other ways to do this, but mirroring a network port or using a dumb hub works too.

Now the most direct way to do this is to remote to the firewall using secure shell (ssh), run tcpdump, capture the traffic, copy it to my PC, and finally look at the file with WireShark.

And that way definitely would work. But it’s a lot of signing in and copying files and manually opening files that was just a bit much for me. Doing a little Bing-fu, I came across a post on the Ubiquiti support forums that allowed me to stream the tcpdump captures directly into WireShark on my Windows PC. 

This command may or may not work for you if you want to do similar packet sniffing. 

plink -batch -l <firewall username> -pw <firewall password> <firewall IP address> sudo /usr/sbin/tcpdump -i <firewall interface, e.g. eth1> -w - host <IP address of Android device> | "c:\program files\WireShark\Wireshark.exe" -k -i -

First up….what is a “plink”? It was new to me too. Plink stands for “PuTTY link” which is a command line interface for PuTTY. It basically signs into my firewall, starts tcpdump, and then redirects the output from tcpdump to a data stream that gets piped to WireShark, where the real magic happens.

The lone ‘-‘ characters in that command line are required: “-w -” tells tcpdump to write the capture to standard output, and “-i -” tells Wireshark to read from standard input.

Note: Run this command the first time to cache the secure key from the firewall:
plink -l <firewall username> -pw <firewall password> <firewall IP address>

It’s also a useful test to make sure you can get connected.

Within WireShark, I used a simple display filter to filter on rtcp packets.

Apply rtcp display filter

The first few rtcp packets from your Android bar may not show the QoS information, but if you scroll down a bit, you will start seeing tagged packets. Also, be sure that you are clicking on entries where the source is your Android device.

What are you looking for? How do you know QoS was tagged? Good question!

Here is a packet that shows QoS was applied. I’ll break this down to help you understand what you are seeing.

Full display of a captured rtcp packet.

Hopefully you figured out the top half shows all of the packets, and you can apply a display filter. After clicking on a packet, the bottom half of the screen shows the details. I’ve expanded the two most relevant sections.

Before we get any further, let me paste in the default settings that Teams recommends.

Recommended Port Ranges and DSCP Values

Now let’s break down the packet capture and see what we have. Starting from the top of the Internet Protocol (IP) section, we can see there are DSCP values. 

For DSCP, we see something called “AF41”. If we look at the table above, we see that AF41 is applied to video. Cool. Can we further verify that this is a video stream?

Yes, yes we can.

 Look down into the User Datagram Protocol (UDP) section and look at the Source Port value.

This tells us that the rtcp packet was sent from the Android device on UDP port 50031. Look up at the chart above, and you can see that media sourced from port 50031 is indeed video traffic as the video source port range is 50020 – 50039.

We now know that QoS markers are applied to our packets and that they are being applied correctly.
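One small aside for when a capture tool only shows raw numbers: the DSCP name maps to a decimal code point, and the byte in the IP header is that code point shifted left by two bits. A quick PowerShell check of the AF41 value from the capture above:

# AF41 is decimal DSCP 34; shifting left two bits gives the raw DSField/ToS byte
$dscp = 34
'0x{0:X2}' -f ($dscp -shl 2)   # returns 0x88, which Wireshark decodes as AF41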

Enterprise Voice in Skype for Business Server 2015 book. Free!


Early cover. Note slightly different book name.

In 2016, I released the book Enterprise Voice in Skype for Business Server 2015. It is the definitive tome on connecting Skype for Business with the phone system. As time has passed, purchases of the book have basically fallen off the cliff.

I’m now giving the book away for free. Why would you want to download this?

1.)  It’s free so duh

2.) Much of what is in here is what was ported to Teams. If you are unsure of calling plans and other things like that, this book will help explain them.

3.) The appendix on Regular Expressions is probably the most useful chapter nowadays. The chapter gives plenty of examples for successfully creating Regular Expressions specific to calling (a tiny taste of that pattern style follows this list).
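For instance, the classic normalization case: turning a dialed 10-digit US number into E.164. This is not lifted from the book, just a hedged illustration of the pattern style, written in PowerShell:

# Illustrative only: normalize a 10-digit number to E.164 with a capture group
$dialed = '3175551234'
if ($dialed -match '^(\d{10})$') { "+1$($Matches[1])" }   # outputs +13175551234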

If you want to read the back story of why and how I created the book, you can read that here

Download here

How to join a Microsoft Teams meeting with a browser

Videos are best viewed in full screen mode. Click the square in the bottom right of the video.

Using Google Chrome on Windows 10.

If you are asked for a sign-in, start Chrome in Incognito mode.


Using Chrome on macOS Catalina. Do not use Safari.

UC Now v 9 Released

Not a ton has changed to warrant a full version update, but there is a major feature addition. (Also, I was at version 8.8 so at some point I had to make the jump, right?)

The big addition is the ability for you to remove modules that you’re not interested in. Prior to this release, you got the whole monolithic list of modules and you were stuck with it. Now, by going to the Explore module you can pick which modules you want and which you don’t.

You can find the Explore option by going to the menu and scrolling all the way to the bottom. From there, tap on the Explore option.

 

This brings up a list of all of the modules in the app. From here, simply tap on the modules you no longer want to see in the main feed.

Once you’ve made your changes, click the back arrow on your phone and the menu will be updated. If you compare the first image with the one below, you’ll see that a bunch of modules (such as Windows Weekly) have been removed from the feed.

For those unfamiliar with the app, it’s a central point for things related to Exchange, Skype for Business, Microsoft Teams, and Office 365. In one place you can see the latest blog posts that have been published as well as get quick access to TechNet, Tech Community, YouTube videos, Twitter feeds, etc.

Here is a summary of the changes made to the Android app:

  • Added ability to remove modules
  • Added Coffee in the Cloud
  • Fixed feeds for Microsoft Mechanics and Microsoft Unboxed
  • Added links to the Office 365 Driving Adoption website
  • Removed link to FastTrack Myadvisor
  • Shout out to those who provide feedback. In this particular update, thanks to Karuana Gatimu for additional site suggestions

The Android App can be found at this link.

 


 

*The poorly updating feed may be due to app caching. To try to fix this, go to Settings on your phone, then Apps & Notifications. Click on “>See all apps” and scroll down to UC Now. Click on the Storage option and then click on both “Clear cache” and “Clear Storage”.

UC Now v 8.8 Released

Every so often, the spirit moves me and I take the time to update the Android app that I have out there. And about 3 months ago, the spirit moved me. But I ran into a technical problem (brain malfunction) and I never was able to publish a version that didn’t immediately crash after launching.

A comment by bueschi got me to revisit this issue and as of this morning it’s fixed.

For those unfamiliar with the app, it’s a central point for things related to Exchange, Skype for Business, Microsoft Teams, and Office 365. In one place you can see the latest blog posts that have been published as well as get quick access to TechNet, Tech Community, YouTube videos, Twitter feeds, etc.

Here is a summary of the changes made to the Android app:

  • Added a dedicated podcast section
  • Added All About 365 podcast
  • Added Collab365 podcast
  • Added Microsoft Cloud Show podcast
  • Added Microsoft Mechanics YouTube playlist
  • Added Microsoft Unboxed YouTube playlist
  • Added MS Cloud IT podcast
  • Added O365Eh! podcast
  • Added SQL Server Radio podcast
  • Added the Intrazone podcast
  • Added Windows Weekly podcast
  • Added Office365 Status Twitter link
  • Latest version of app framework so ostensibly bug fixes and performance enhancements

The Android App can be found at this link.

Here is a summary of the changes made to the Windows app:

  • Added Tech Community links
  • Added Skype Operations Framework link
  • Added YouTube video pages
  • Added Thought stuff video blog
  • Added Three 65.live link
  • Added Tech Community links for Office 365 and Teams
  • Updated link for Skype Dialing Optimizer
  • Updated link for UC Architects
  • Reorganized everything into groups to more quickly get to the technology you’re interested in
  • Latest version of app framework so ostensibly bug fixes and performance enhancements

 

For those interested in iOS/Windows/Linux, there is not and there will not be a version of this app for those devices. Instead, please visit UC-Now.com.

 

Fireball Whisky Ice Cream

There’s no point in making “regular” ice cream. The ingredients to make ice cream at home are fairly pricey and the quality ice cream you can buy at the grocer is about as good as you can make at home.

So I pretty much only make boozy ice cream that you cannot buy.

For reasons I forgot, I was challenged to make Fireball Whisky ice cream.

Challenge accepted!

Below is the recipe I used. I took a basic whisky custard recipe and adjusted it to my needs.

I grated a cinnamon stick in the blender to add to the cinnamon punch of the ice cream. No clue if it was 2 teaspoons. Probably less. But you can always add more in after the cream starts churning in the ice cream maker to suit your needs. I know I added more than 1/3 cup Fireball because the ice cream is a little soft. I’m thinking that’s due to the alcohol preventing a full freeze. The original recipe called for 1/4 cup. But that’s not enough. So boost it to 1/3 cup but don’t go much higher like I did.

Fireball Whisky Ice Cream
2 cups whole milk
2 cups heavy cream
1 1/4 cups granulated sugar
8 large egg yolks
1/3 cup Fireball Whisky
1/2 vanilla bean
1 teaspoon fresh ground cinnamon + extra to taste.

Take a cinnamon stick or two and pulverize it in a blender/coffee grinder/food processor.

In a 4-quart saucepan, combine milk, cream, and roughly half the sugar. Set over high heat, and cook, stirring occasionally and not over-zealously, until the mixture comes to a boil, about 5 minutes.

Meanwhile, in a medium bowl, whisk egg yolks and remaining sugar until smooth, heavy, and pale yellow, about 30 seconds.

When cream mixture just comes to a boil, whisk, remove from heat, and, in a slow stream, pour half of it over the yolk-sugar mixture, whisking constantly until blended. (This is called tempering the eggs, so that they do not scramble.)

Return the pan to stovetop over low heat. Whisking constantly, stream yolk-cream mixture back into pan.

With a wooden spoon, continue stirring until the mixture registers 165 to 180 degrees on an instant-read thermometer, about 2 minutes. Do not heat above 180 degrees or the eggs will scramble. The mixture should be slightly thickened and coat the back of a spoon, with steam rising, but not boiling. Stir in the whisky.

With a sharp knife, cut the vanilla bean in half lengthwise then scrape out the tiny seeds on the inside. Add the vanilla bean and the seeds to the cream mixture.

Add the cinnamon to the cream mixture.

Remove from heat and strain mixture through a fine mesh sieve into a bowl.

Taste. If it needs more cinnamon, whisk in more, then set aside to cool. When the mixture has cooled a bit, cover and refrigerate for several hours until well chilled. Press plastic wrap over the surface to prevent a skin from forming.

When the cream has fully cooled (at least three hours in the refrigerator), pour it into the ice cream maker and follow the vendor instructions from there.

 

UC Now Apps Updated

This is what the Windows 10 Mobile version looks like, for those curious if that is still a thing. Yes, it’s still a thing.

Every so often, the spirit moves me and I take the time to update the Windows 10 and Android apps that I have out there. And this weekend, the spirit moved me.

For those unfamiliar with the apps, they are a central point for things related to Exchange and Skype for Business. So in one place you can see the latest blog posts that have been published as well as get quick access to TechNet, Tech Community, YouTube videos, Twitter feeds, etc.

In this version there is also a little bit added for Office 365 and Microsoft Teams. It’s not much. If you have resources you’d like added, please let me know in the comments.

Here is a summary of the changes made to the Android app:

  • Added Tech Community links
  • Added Skype Operations Framework link
  • Added YouTube video pages
  • Added Thought stuff video blog
  • Added Three 65.live link
  • Added Tech Community links for Office 365 and Teams
  • Fixed Link to Test Connectivity website
  • Updated link for Skype Dialing Optimizer
  • Updated link for UC Architects
  • Removed Lync Certified Devices module
  • Removed Lync Press module
  • Updated to the latest version of the app framework, so ostensibly bug fixes and performance enhancements

The Android App can be found at this link.

Here is a summary of the changes made to the Windows app:

  • Added Tech Community links
  • Added Skype Operations Framework link
  • Added YouTube video pages
  • Added Thought stuff video blog
  • Added Three 65.live link
  • Added Tech Community links for Office 365 and Teams
  • Updated link for Skype Dialing Optimizer
  • Updated link for UC Architects
  • Reorganized everything into groups to more quickly get to the technology you’re interested in
  • Updated to the latest version of the app framework, so ostensibly bug fixes and performance enhancements

The updated Windows 10 app can be found at this link.


For those interested in Apple devices, there is not and there will not be a version of this app for those devices. Instead, please visit UC-Now.com.

Presence Lying, Like a Pro

One of the great things about Skype for Business is that you can see whether a remote person is available for a chat or busy in a meeting. Taking advantage of the presence capabilities is genuinely useful.

However, some people try to game the system and artificially set their presence. They may set their presence to “Away” when they are actually available just because they don’t want to be bothered by anyone. Yet those same people are more than happy to send you an IM while they are “Away”.

You’ve been Away 18 hours yet still sent me an IM?

 

I’m no fan of being a Presence Liar, but there are some decent reasons to do it.

Some people schedule “meetings” so that they can have time to get actual work done.

But what if you work remotely and, you know, screw around a bunch? You don’t want your boss to look at your status all the time and see “Away” because you aren’t using your PC much while napping.

How can you game the system?

By default, your status changes from Available to Away after 5 minutes of not using your mouse or PC. Having to touch your PC once every 5 minutes is totally preventing me from napping during the workday.

One solution is to download a little bit of software to take this pain away from you. Caffeine is one option (I haven’t tried it). Apparently this utility simulates pressing the F15 (not a typo) key every 59 seconds. This one fakes mouse movements every so often.
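
If you’re curious what these utilities are doing under the hood, the idea is simple: generate a tiny bit of input on a timer that’s shorter than the idle threshold. Here’s a minimal sketch of the idea in Python, assuming the third-party pyautogui library (my choice for illustration, not something any of these tools actually use):

import time
import pyautogui  # third-party library: pip install pyautogui

NUDGE_INTERVAL_SECONDS = 59  # well under the 5-minute idle threshold; Caffeine uses 59 seconds too

def jiggle_forever():
    # Nudge the pointer one pixel right, then one pixel back, so nothing
    # visibly changes but Windows still registers mouse activity.
    while True:
        pyautogui.moveRel(1, 0)
        pyautogui.moveRel(-1, 0)
        time.sleep(NUDGE_INTERVAL_SECONDS)

if __name__ == "__main__":
    jiggle_forever()

Caffeine takes the keyboard route (that fake F15 press) instead of moving the mouse, but the principle is the same.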

So that’s an easy and free fix. But what if you work in an environment where you can’t install any software on your computer? Or what if your company scans your PC for software like this and you end up on a report and a call from your boss?

Is there a stealthier way to stay “available” while napping?

Yes, yes there is.

Welcome to the world of Mouse Jiggler.

(photo of the Mouse Jiggler)

This little bad boy is the answer to all your napping-during-work-while-still-being-“available” dreams. As far as Windows is concerned, you just plugged in a generic HID-compliant mouse. As far as you are concerned, you are now moving the mouse every so often – just enough to keep the screen awake and your status as Available.

So how does Windows see this? Generic Mouseville, Population one.

I ran a Belarc Advisor report against my PC and all it did was report this:

HID-compliant mouse (4x)

I actually have 2 real mice connected plus the Mouse Jiggler. Not sure what that fourth one is!

I also downloaded some random USB reporting tool and I found the Mouse Jiggler in my list of USB devices.

(screenshot of a USB reporting tool showing the Mouse Jiggler)

 

So how much and how often does the mouse get moved?

Excellent question.

I used a mouse recording utility, set it to start, then left the room to watch some football. I came back and saw the below output.

(screenshot of the mouse recorder output)

 

The very first entry is me clicking “record” with my mouse so we can ignore that entry. Thereafter, everything is being input by the Mouse Jiggler.

The second column is the X-axis, the third column is the Y-axis, and the fourth column is the elapsed milliseconds since the last movement.

The first action happened after almost 6 minutes. The Jiggler moved the mouse 1 pixel. Three minutes later, the mouse was moved 7 pixels. And then after about the same delay, the mouse was moved 8 pixels.

This is enough movement to keep my Skype for Business client listed as Available.

It also keeps the screensaver off. This is useful if you work somewhere that sets the screensaver lockout duration to 1 minute.
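
As an aside, if you want to see what your own lockout is set to, the per-user screensaver timeout normally lives in the registry under HKEY_CURRENT_USER\Control Panel\Desktop as the ScreenSaveTimeOut value (a string, in seconds). Here’s a quick check using Python’s standard-library winreg module; note that a domain Group Policy can enforce its own value elsewhere, so treat this as informational:

import winreg  # Windows-only, part of the standard library

def screensaver_timeout_seconds():
    # Read the per-user screensaver timeout, in seconds, if one is set.
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, r"Control Panel\Desktop") as key:
        value, _ = winreg.QueryValueEx(key, "ScreenSaveTimeOut")
        return int(value)

if __name__ == "__main__":
    try:
        print(f"Screensaver timeout: {screensaver_timeout_seconds()} seconds")
    except (FileNotFoundError, OSError):
        print("No screensaver timeout set for this user.")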

According to the Mouse Jiggler website, there are plenty of reasons other than napping while “available”:

Presenters use Mouse Jiggler because it allows them to present without the screensaver popping up in the middle of the presentation. Employees who are unable to change their system sleep settings or install unapproved software on their computers find Mouse Jiggler convenient to keep screen savers or login screens from activating.

IT professionals use the Mouse Jiggler to prevent password dialog boxes due to screensavers or sleep mode after an employee is terminated and they need to maintain access to their computer.

Computer forensic investigators use Mouse Jigglers to prevent password dialog boxes from appearing due to screensavers or sleep mode. With many computer hard drives now employing full-disk encryption, such modes can greatly increase the time and cost of a forensic investigation.

 

I don’t care about those. I just like to know that I’m always Available, even while napping.

And if my boss is reading this, I am only writing this as an overview of a device related to my expertise, not because I’m nappinzzzzzzzzzzzzzzzzzzzzzz…..

 

My first publishing experience. Windows NT 4.0 anyone?

The book I recently released – Enterprise Voice in Skype for Business Server 2015 – was not my first publishing credit. No, to find that, I need to take you back to 1997 and the book Windows NT Troubleshooting and Configuration.

I was born and raised in Indianapolis, Indiana. Unbeknownst to many, there is a bit of an industry in publishing tech manuals in that city. You know those “…for Dummies” books? That franchise is at least partially run out of Indianapolis.

The publisher of this particular book was Sams Publishing, also based out of Indianapolis. Because of this, as you start networking throughout the Indianapolis tech community, it won’t take long until you run into a bunch of people who have written at least a chapter or two in a published book.

And it was through this that I was given the opportunity to write two chapters in this book:

 

  • Chapter 30: Windows NT and Dynamic Host Configuration Protocol
  • Chapter 34: Integrating Windows NT and UNIX.

Back in the day, I was a bad-ass at Windows NT. That’s not just boasting. I’ll challenge my 27-year-old self against anyone on Windows NT. I lived it. It was how I made my living: building and supporting Windows NT networks. This was mostly for small businesses in the Indianapolis area, but I had one account with about 1,500 people across a dozen or so locations, and there were two global companies where we supported one of their divisions in Indianapolis.

So when the opportunity came up to write a chapter or two, I jumped at it.

But as time passed, I lost my copy of the book. I gave one to my mom but when I went to visit her a few weeks ago, the book was gone. And in the mists of time, I forgot the name of the book too! I just remembered it had a mostly green cover.

Thanks to the Internet and an hour of my life, I found a book that looked really familiar. My name wasn’t listed on any of the sites where I found it, but I had a feeling this was the one. So I bought a used copy for about $6 USD including shipping. It showed up today.

And there is my name!

Below are pictures of some of the pages I wrote. I remember doing a lot of work on this book, setting up a lab, etc. This was before VMs were a thing, so I had to have at least 2 PCs. I forget the exact configuration. I’m old, and this is trivia that is apparently no longer relevant to me. Or probably to you.

So here you go, a few snippets from the long-lost book which was my first foray into the publishing world.

 

(photos of a few of the book pages)