Nick Smith, Principal Program Manager - PowerPoint Presentation
Uploaded by karlyn-bohler, 2018-09-22




Presentation Transcript


Nick Smith, Principal Program Manager
Microsoft Corporation

Getting the Most out of Lync Server Monitoring Service Data

SERV302-R

Can you fix it before you diagnose it?

Session Objectives and Takeaways

Session Objectives
- Describe the fundamentals of the Monitoring service
- What intelligence can you gather from the Monitoring server data?
- How can you use this data to answer deployment questions?
- How can you use the data to assist in troubleshooting your environment?
- What other tools can you leverage beyond the built-in reports?

Takeaways
- Why the Monitoring service is a required component in every Lync deployment
- Learn how to interpret the data in the monitoring server reports
- Learn how to turn the monitoring server into a powerful troubleshooting tool

Monitoring Server Foundations

Monitoring Service Components

Clients
- All endpoints collect and report call quality data
- Call quality is reported based on data shared between endpoints during the call

Server
- The server collects the call quality reports and call detail reports and submits them to the monitoring databases

Databases
- LcsCDR: Call Detail Records
- QoEMetrics: Quality of Experience data

(Diagram: call leg)

How Do Call Detail Reports Get Collected?

- Front End Servers relay all SIP traffic
- The agent evaluates SIP messages for:
  - Registrations
  - Response codes
  - Diagnostic codes
  - Meeting events
- This data is then sent to the LcsCDR database for collection
- To view this process, enable component logging of the UdcAgent component

How Is Call Quality Data Collected?

- All media endpoints collect Quality of Experience (QoE) data
- At the end of the call, each endpoint reports its QoE data
- Call stream data is reported into the QoEMetrics database
- Look for 'VQReport' in SIP logs

QoEMetrics and LcsCDR Databases

- QoEMetrics and LcsCDR database schemas are documented on TechNet
  - Needed to write custom SQL queries
  - A good reference for understanding metrics and their associated thresholds
- Most in-box reports are based on pre-defined SQL views

Key QoE views:
- AudioStreamDetail
- MediaLine
- QoEReportsCallDetail
- Session
- VideoStreamDetail

Key CDR views:
- Conferences
- ConferenceSessionDetails
- Registration
- SessionDetails
- VoIPDetails
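Since the in-box reports are built on these views, custom SQL against them is the natural next step. Below is a hypothetical sketch using an in-memory SQLite table as a stand-in for the AudioStreamDetail view; the real view lives on the QoEMetrics SQL Server database and has many more columns, and the sample rows here are invented.

```python
import sqlite3

# Stand-in for the QoEMetrics AudioStreamDetail view. The column names
# (PacketLossRate, RoundTrip) are QoE metrics this deck cites; the data
# below is fabricated for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE AudioStreamDetail (
        ConferenceDateTime TEXT,
        PacketLossRate     REAL,
        RoundTrip          INTEGER
    )
""")
conn.executemany(
    "INSERT INTO AudioStreamDetail VALUES (?, ?, ?)",
    [
        ("2010-09-19", 0.01, 80),
        ("2010-09-19", 0.15, 620),
        ("2010-09-20", 0.02, 95),
    ],
)

# A custom query in the spirit of the slide: average packet loss per day.
for day, avg_loss in conn.execute("""
    SELECT ConferenceDateTime, AVG(PacketLossRate)
    FROM AudioStreamDetail
    GROUP BY ConferenceDateTime
    ORDER BY ConferenceDateTime
"""):
    print(day, round(avg_loss, 2))
```

Against the real database, the same SELECT would run unchanged apart from the connection string, since the views expose these metrics per audio stream.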

Understanding the Data

Characteristics of a Poor Audio Call

- RatioConcealedSamplesAvg > 7% (0.07): the ratio of auto-generated audio data to real speech data; a high ratio means audio data was delayed or lost due to network connectivity issues
- PacketLossRate > 10% (0.1): average packet loss rate during the call
- JitterInterArrival > 30 ms: maximum network jitter during the call
- RoundTrip > 500 ms: round-trip time from RTCP statistics
- DegradationAvg > 1.0: the amount the Network MOS was reduced because of jitter and packet loss
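These thresholds lend themselves to a simple check. Below is a minimal sketch (not the product's own logic) of flagging a poor audio call from one QoE record; the field names mirror the metrics above, and the sample values are invented.

```python
# Poor-audio-call thresholds as listed on this slide.
THRESHOLDS = {
    "RatioConcealedSamplesAvg": 0.07,  # ratio of concealed to real audio samples
    "PacketLossRate": 0.10,            # average packet loss during the call
    "JitterInterArrival": 30,          # maximum network jitter, ms
    "RoundTrip": 500,                  # RTCP round-trip time, ms
    "DegradationAvg": 1.0,             # Network MOS degradation
}

def poor_call_reasons(stream):
    """Return the metrics in `stream` that exceed their poor-call threshold."""
    return [name for name, limit in THRESHOLDS.items()
            if stream.get(name, 0) > limit]

# Invented sample record: too much concealment and too much jitter.
sample = {"RatioConcealedSamplesAvg": 0.09, "PacketLossRate": 0.02,
          "JitterInterArrival": 41, "RoundTrip": 120, "DegradationAvg": 0.4}
print(poor_call_reasons(sample))  # -> ['RatioConcealedSamplesAvg', 'JitterInterArrival']
```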

What can I get from the 'in-box' reports?

Diagnostic Logs

"Thanks for reporting your issue - do you have logs with that?"

- Both diagnostic codes and media quality are collected and reported
- Correlate the user's reported issue with the associated session reported via CDR/QoE
- Provides an objective view of the experience

Trend Analysis

Understand how to determine the scope of the issue:
- Who else is impacted?
- How often does this happen?
- Why is it happening?

Proactive Operations: Understanding Deployment Health

- Reviewing reports should be part of your daily operations
- Gives you a snapshot of usage, quality trending, worst-performing servers, and top failures
- The SCOM management pack can monitor the QoE database to identify poorly performing locations

Scenario Review
Report: Peer-to-peer audio call report

Scenario Review
Report: Conference call report

Life outside the 'in-box' reports

The Lync Call Quality Methodology

- Voice Quality Framework
- Uses network telemetry in QoE
- Prescriptive approach for measured improvements
- Service Management end state

Health Analysis Tool

- Allows you to focus on the most frequent diagnostic codes affecting service reliability
- Download as part of the RASK: http://aka.ms/LyncRASK

 

 

Session Success Rate

| P2P Session         | Number of Sessions | Current Week | Previous Week | Weekly Change |
|---------------------|--------------------|--------------|---------------|---------------|
| Application Sharing | 214                | 100.00%      | 99.91%        | 0.09%         |
| Audio               | 2012               | 100.00%      | 99.98%        | 0.02%         |
| File Transfer       | 30                 | 100.00%      | 100.00%       | 0.00%         |
| IM                  | 6017               | 100.00%      | 100.00%       | 0.00%         |
| Video               | 194                | 100.00%      | 100.00%       | 0.00%         |

 

 

Session Success Rate

| Conferencing Sessions   | Number of Sessions | Current Week | Previous Week | Weekly Change |
|-------------------------|--------------------|--------------|---------------|---------------|
| conf:applicationsharing | 618                | 100.00%      | 99.76%        | 0.24%         |
| conf:audio-video        | 1598               | 100.00%      | 97.80%        | 2.20%         |
| conf:chat               | 1035               | 100.00%      | 99.91%        | 0.09%         |
| conf:focus              | 2092               | 100.00%      | 99.29%        | 0.71%         |
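The Weekly Change column in these success-rate tables is the current week's rate minus the previous week's, in percentage points. A minimal sketch:

```python
# Weekly Change = current week's success rate minus previous week's,
# in percentage points (values as shown in the tables above).
def weekly_change(current_pct, previous_pct):
    return f"{current_pct - previous_pct:.2f}%"

print(weekly_change(100.00, 97.80))  # conf:audio-video row -> 2.20%
print(weekly_change(100.00, 99.91))  # conf:chat row -> 0.09%
```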

| Week Of   | Response Code | Diagnostic Id | Reason String | Diagnostic Count | Media Volume | Diagnostic Rate |
|-----------|---------------|---------------|---------------|------------------|--------------|-----------------|
| 9/19/2010 | 200 | 22    | Call failed to establish due to a media connectivity failure when both endpoints are internal | 9 | 4590 | 0.20% |
| 9/12/2010 | 200 | 22    | Call failed to establish due to a media connectivity failure when both endpoints are internal | 7 | 3919 | 0.18% |
| 9/5/2010  | 408 | 3033  | The C3P transaction timed-out      | 2 | 3147 | 0.06% |
| 9/5/2010  | 504 | 1038  | Failed to connect to a peer server | 2 | 3147 | 0.06% |
| 9/19/2010 | 200 | 21018 | Server internal error in ASMCU     | 2 | 4590 | 0.04% |
| 9/5/2010  | 200 | 21018 | Server internal error in ASMCU     | 1 | 3147 | 0.03% |
| 9/12/2010 | 200 | 21018 | Server internal error in ASMCU     | 1 | 3919 | 0.03% |
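The Diagnostic Rate column is the Diagnostic Count divided by the Media Volume for that week, shown as a percentage (for example, 9 / 4590 rounds to 0.20%). A minimal sketch:

```python
# Diagnostic Rate = Diagnostic Count / Media Volume, as a percentage.
def diagnostic_rate(diagnostic_count, media_volume):
    return f"{diagnostic_count / media_volume:.2%}"

print(diagnostic_rate(9, 4590))  # first table row -> 0.20%
print(diagnostic_rate(2, 3147))  # C3P timeout row -> 0.06%
```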

Adoption Dashboard

- Designed to enhance the tracking of your adoption rates
- Complements the 'in-box' usage reports
- Provides:
  - Registered users
  - Active users
  - Usage charts by modality
  - P2P sessions
  - Conferences
  - Dial-in conferences
  - Total session minutes
- Will be released publicly soon via the RASK

In Review: Session Objectives and Takeaways

Session Objectives
- Describe the fundamentals of the Monitoring service
- What intelligence can you gather from the Monitoring server data?
- How can you use this data to answer deployment questions?
- How can you use the data to assist in troubleshooting your environment?
- What other tools can you leverage beyond the built-in reports?

Takeaways
- Why the Monitoring service is a required component in every Lync deployment
- Learn how to interpret the data in the monitoring server reports
- Learn how to turn the monitoring server into a powerful troubleshooting tool

Appendix

Common scenarios

Common Questions

From users:
- What was wrong with this call?
- Why did I see a User Facing Diagnostic (UFD)?
- I was just on a call and it dropped mid-stream. Why?

From administrators:
- How do I read the in-box reports?
- How can I determine what the value of X in a particular report means?
- Where do I start?

Scenario Review
Overview: Where do I start?

Scenario Review
Report: Peer-to-peer audio call report

Scenario Review
Report: Conference call report

Scenario Review
Report: Top failures