
Power BI Data Analyst Cheat Sheet

PL-300 Tests Data Modeling and DAX Logic — Not Just Report Building

Building visuals in Power BI is table stakes. The exam tests whether your data model is correct, your DAX is accurate, and your reports serve business decisions.

Among the harder Microsoft certs
Avg: ~63–68%
Pass: 700/1000
Most candidates understand Power BI Data Analyst concepts — and still fail. This exam tests how you apply knowledge under pressure.

PL-300 Power BI Solution Framework

PL-300 tests the full Power BI workflow. Data modeling and DAX are the most heavily tested areas and the most common failure points. Understand filter context vs. row context before attempting DAX questions.

  1. Data Preparation — Power Query: transform, clean, shape data from multiple sources
  2. Data Modeling — Star schema design, relationships, cardinality, cross-filter direction
  3. DAX — Measures vs. calculated columns, context (row vs. filter), time intelligence
  4. Report Design — Appropriate visual selection, interaction design, accessibility
  5. Service & Deployment — Workspaces, row-level security, refresh, sharing and distribution

Wrong instinct vs. correct approach

A measure returns different values depending on which slicer is used
✕ Wrong instinct

Check the DAX formula syntax for errors

✓ Correct approach

Investigate filter context — the measure is likely being evaluated in different filter contexts created by the slicers; use CALCULATE with explicit filters or ALLSELECTED to control which context the measure responds to
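A minimal sketch of the idea, using a hypothetical Sales table and Date dimension (table and column names are illustrative):

```dax
-- Responds to every filter, including slicers (default behavior)
Total Sales = SUM ( Sales[Amount] )

-- Ignores any slicer or filter on Date[Year], keeps all other filters
Sales All Years =
CALCULATE ( [Total Sales], ALL ( 'Date'[Year] ) )

-- Respects only the selections the user has actually made —
-- useful for "share of selected total" calculations
Pct of Selected =
DIVIDE (
    [Total Sales],
    CALCULATE ( [Total Sales], ALLSELECTED () )
)
```

Placing these three measures side by side in a matrix makes it easy to see which filters each one responds to as slicer selections change.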

A report performs slowly with large datasets
✕ Wrong instinct

Reduce the number of visuals on the page

✓ Correct approach

Optimize the data model: remove unused columns, use integer keys instead of string keys, replace calculated columns with measures where possible, and check for high-cardinality columns that degrade compression

Users in different regions should only see their regional data
✕ Wrong instinct

Create separate reports for each region

✓ Correct approach

Implement dynamic row-level security using the USERNAME() or USERPRINCIPALNAME() DAX functions (USERPRINCIPALNAME() is the preferred choice in the Power BI service) — one report with RLS is more maintainable than multiple region-specific reports
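A sketch of a dynamic RLS role filter, assuming a hypothetical UserRegion mapping table with UserEmail and RegionKey columns:

```dax
-- Role filter expression defined on the Region dimension table:
-- each signed-in user sees only the region(s) mapped to their email
'Region'[RegionKey]
    = LOOKUPVALUE (
        UserRegion[RegionKey],
        UserRegion[UserEmail], USERPRINCIPALNAME ()
    )
```

The filter propagates from Region to the fact table through the model's relationships, so every visual in the report is restricted automatically. Always validate with "View as role" in Power BI Desktop before publishing.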

Know these cold

  • Star schema always — fact tables + dimension tables, not wide flat tables
  • Measures for aggregations (filter context); calculated columns for row-level attributes (row context)
  • CALCULATE is the most important DAX function — it modifies filter context
  • Single-direction relationships by default; bidirectional only when explicitly needed
  • RLS must be defined and tested before deployment — don't treat it as optional
  • DirectQuery for real-time data; Import mode for performance — choose based on refresh requirements
  • Power Query M transformations run at refresh, before data reaches the model; DAX measures evaluate at query time (DAX calculated columns, like M, compute at refresh)

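The time-intelligence point deserves a concrete example. A sketch, assuming a base measure Total Sales = SUM ( Sales[Amount] ) and a marked date table named 'Date':

```dax
-- Year-to-date total, reset at each year boundary
Sales YTD = TOTALYTD ( [Total Sales], 'Date'[Date] )

-- Same period in the prior year, for year-over-year comparison
Sales PY =
CALCULATE ( [Total Sales], SAMEPERIODLASTYEAR ( 'Date'[Date] ) )
```

Both patterns depend on a contiguous, marked date table — a frequent exam trap is time intelligence failing because the date table is missing or unmarked.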
Can you answer these without checking your notes?

In this scenario: "A measure returns different values depending on which slicer is used" — what should you do first?
Investigate filter context — the measure is likely being evaluated in different filter contexts created by the slicers; use CALCULATE with explicit filters or ALLSELECTED to control which context the measure responds to
In this scenario: "A report performs slowly with large datasets" — what should you do first?
Optimize the data model: remove unused columns, use integer keys instead of string keys, replace calculated columns with measures where possible, and check for high-cardinality columns that degrade compression
In this scenario: "Users in different regions should only see their regional data" — what should you do first?
Implement dynamic row-level security using the USERNAME() or USERPRINCIPALNAME() DAX function — one report with RLS is more maintainable than multiple region-specific reports

Common Exam Mistakes — What candidates get wrong

Building wide tables instead of star schema models

Importing all data into a single flat table creates performance problems and limits DAX flexibility. A star schema (fact tables + dimension tables) is the correct approach for any model of meaningful complexity. Candidates who avoid relationship-based modeling fail model design questions.

Confusing calculated columns with measures

Calculated columns compute row by row (row context) at data refresh time and consume memory in the model. Measures compute at query time in filter context and are more efficient for aggregations. Using calculated columns for aggregations is a performance and correctness error.
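The distinction in a short sketch, using a hypothetical Sales table:

```dax
-- Calculated column: evaluated per row at refresh, stored in the model.
-- Appropriate for a row-level attribute you need to slice or group by.
Price Band =
IF ( Sales[UnitPrice] >= 100, "Premium", "Standard" )

-- Measure: evaluated at query time in the current filter context.
-- The right tool for aggregations.
Total Revenue =
SUMX ( Sales, Sales[Quantity] * Sales[UnitPrice] )
```

If you find yourself summing a calculated column that itself contains an aggregate, the logic almost certainly belongs in a measure instead.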

Misunderstanding DAX filter context

CALCULATE modifies the filter context. ALL removes filters. FILTER adds row-level conditions. Candidates who don't understand how these functions interact produce incorrect DAX that returns wrong values in different slicing scenarios.
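How the three interact, in a short sketch (Sales and Region are hypothetical tables):

```dax
-- ALL removes filters: share of the grand total across all regions,
-- regardless of what the user has sliced
Pct of All Regions =
DIVIDE (
    SUM ( Sales[Amount] ),
    CALCULATE ( SUM ( Sales[Amount] ), ALL ( Region ) )
)

-- FILTER adds a row-by-row condition on top of the existing context
Large Orders =
CALCULATE (
    SUM ( Sales[Amount] ),
    FILTER ( Sales, Sales[Amount] > 1000 )
)
```

Reading CALCULATE as "evaluate this expression in a modified filter context" — rather than as a conditional — is the mental model the exam rewards.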

Ignoring bidirectional relationship risks

Bidirectional cross-filter can cause ambiguous filter paths in complex models. The default single-direction relationship is safer. Candidates who apply bidirectional relationships without considering the data model structure introduce data accuracy issues.
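When a single calculation genuinely needs filtering against the relationship's direction, CROSSFILTER inside CALCULATE scopes the change to that one measure instead of flipping the relationship model-wide. A sketch with hypothetical Customer and Sales tables:

```dax
-- Enables bidirectional filtering only while this measure evaluates;
-- the relationship stays single-direction in the model
Customers With Sales =
CALCULATE (
    DISTINCTCOUNT ( Customer[CustomerKey] ),
    CROSSFILTER ( Sales[CustomerKey], Customer[CustomerKey], BOTH )
)
```

This keeps filter paths unambiguous for every other calculation in the model.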

Publishing reports without configuring row-level security

Row-level security (RLS) restricts which data each user sees. Static and dynamic RLS roles must be defined before deployment. Candidates who skip RLS configuration fail security and compliance deployment questions.

PL-300 rewards model design and DAX precision. Test whether your Power BI skills go beyond building visuals.