
***The ATAR (Part 1): Unlock the Weird Inner-Workings of ATAR Calculations***

hschelper01

Active Member
Joined
Apr 6, 2020
Messages
168
Gender
Male
HSC
2019
Think you’re too busy with mid-year exams and preparation for the HSC to bother reading about ATARs, ranks and the conspiracy that is scaling?

GUESS AGAIN.

Actually ‘getting’ how the HSC works and how the ATAR is calculated is going to be your key to nailing this year.

It’s going to help you realise exactly how important your exams and assessments are, what your exact goals should be and what to do after these exams.

There are BILLIONS* of different articles on how ATARs are calculated and it’s just as easy to get super confused as it is to get caught up in elaborate strategies on how to “win the HSC game”. The main thing to remember is that none of these strategies are more important than studying smart to get your best results, but you can use them to make smart decisions about your study.

This is a big one to tackle so we’re going to break it up into two parts for you, starting with: exactly what you need to know about ATAR calculations.

*This may be an exaggeration

ATARs:
If you’ve ever looked up exactly what an ATAR is, chances are that you’ve come across something like this: “The ATAR is a rank based on an aggregate of scaled marks”.

:rolleyes::rolleyes::rolleyes:

Okay… Well, if you aren’t a fan of complicated equations or confusing words, here are the three most important things to know about your ATAR.



---- Your ATAR is a rank ----
This means that any marks you’re getting back from assessments aren’t an indication of what your ATAR will be. It’s all about how well you’re doing in comparison to every other student in the state. For example, an ATAR of 90 means that you’re in the top 10% of students in the state. Easy, right?



Your final HSC mark is the average of your exam mark and assessment mark.
You’re going to get a Final HSC Mark for each subject and these marks are the basis of your ATAR - the good news is that working them out is actually an easy average of two numbers.

Take your HSC examination mark, add it to your school assessment mark from the rest of year 12, divide by two and boom → that’s your final mark.

It’s seriously that easy.

If you have an HSC assessment mark of 85 in General Maths and then get an examination mark of 91 in your final HSC exam, your final HSC mark for General Maths will be 88.

Exam Mark: 91/100. Assessment Mark: 85/100. Final HSC Mark: 88.
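The averaging above is simple enough to sketch in a couple of lines (a toy illustration only, using the worked numbers from this post — not anything official):

```python
# Toy sketch: the final HSC mark for a subject is the simple average of the
# exam mark and the (moderated) assessment mark, rounded to a whole mark.
def final_hsc_mark(exam_mark: float, assessment_mark: float) -> int:
    return round((exam_mark + assessment_mark) / 2)

print(final_hsc_mark(91, 85))  # the General Maths example above: (91 + 85) / 2 = 88
```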



Your ATAR comes from the final HSC mark for each subject.
Unfortunately, it’s not quite as simple as just taking an average of all your subject marks because of a little gem called: scaling. 🤷‍♂️🤷‍♂️😂

Every year 12 student hears a lot about scaling, but it’s still a tough one to wrap your head around. Basically, once you’ve locked down your final HSC mark for each subject, the wizards at UAC start combining those marks in a way that recognises it’s harder to do well in some subjects than it is in others.

Here’s a sneak peek behind the scenes:
  • You’re given a UAC score out of 50 for each one of your units, and that score reflects both what the subject is and how well you did in it. Example: a 90 in Physics might give you a UAC score of 46, but a 90 in Senior Science might give you a UAC score of 42.
  • The UAC scores for each subject are added together to give an “aggregate” or overall total out of 500. (Remember, only 10 units count and each of them was given a UAC score out of 50).
  • Now, every student starts to get allocated an ATAR based on where their “aggregate” ranks in the state - so the highest 0.05% of all the total marks will be given a 99.95 ATAR, and the top 10% of total marks in the state mean those students will all have an ATAR of 90 and above.
What you really need to know is there aren’t any “bad” or “good” subjects - it’s just a system to compare marks from completely different subjects. There's nothing wrong with doing lower-scaling subjects, you just have to really focus on getting the best marks you possibly can so you're less likely to have results scaled down.
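The aggregation step in those bullet points can be sketched roughly like this (a simplified illustration only — the real UAC process has extra rules, e.g. about which units must be counted, that are ignored here):

```python
# Rough sketch of the "aggregate" idea described above (NOT the real UAC algorithm).
# Assumes each unit has already been given a UAC score out of 50, and that only
# the best 10 units count toward the aggregate out of 500.
def aggregate(unit_scores):
    """unit_scores: per-unit UAC scores out of 50, one entry per unit studied."""
    best_ten = sorted(unit_scores, reverse=True)[:10]
    return sum(best_ten)

# e.g. six 2-unit subjects, so 12 unit scores; only the best 10 count:
units = [46, 46, 44, 44, 42, 42, 40, 40, 38, 38, 35, 35]
print(aggregate(units))  # 420 out of a possible 500
```

The ATAR then comes from where that aggregate ranks against every other student's aggregate in the state, not from the number itself.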

---- Ranks ----
So it all comes down to two marks for each subject: your HSC exam mark and your assessment mark from your school.

It’s not too hard to figure out where the HSC exam mark comes from - it’s the score out of 100 that the NESA markers give you for your HSC exam (just remember that it's not the raw mark, it's an aligned mark that you are given).

It starts to get a bit dicey when you try to figure out where that HSC assessment mark came from, especially because all the schools in NSW have pretty different standards of assessment. The only chance to really compare schools happens during that HSC exam which is the same for every student. So to cut to the chase, your actual assessment marks mean nothing and instead, your school ranking is all-important.

Instead of using the exact assessment mark, NESA actually assigns you a mark based on your school rank and the school’s results in each subject. This part can be tricky to explain so let’s look at an example of the 48 economics students in one school - Scots.
  • The school is going to rank each student from 1 - 48 based on how well they do in their assessments, mid-years, trials etc. and then send that over to NESA.
  • All the students will sit the same HSC economics exam like every other kid in the state and earn an exam mark.
So, let’s say the top mark from Scots students in the HSC exam was a 96, the average was 84 and the lowest mark was a 72.

Maxx was ranked #1 at Scots and scored a 92 in the HSC exam. Now, that #1 rank is worth the highest exam mark from all Scots students, so his assessment mark is going to be a 96. Here are his final results:

Exam mark: 92. Assessment mark: 96. HSC mark: 94.

And it works like that for every rank. See, Toby was ranked #25 and scored 85 in the HSC exam. That means his mid-range rank is going to be worth about the average exam mark of the school, maybe an 83. His results are:

Exam mark: 85. Assessment mark: 83. HSC mark: 84.

And, you guessed it, whoever - let’s call him Thomas - was ranked #48 at Scots economics is going to have an assessment mark of the lowest exam mark: 72.

The bottom line is that your assessment mark is always going to be taken from your school’s range of scores in the HSC exam so just remember, the actual marks you’re getting in your assessments don’t mean anything, it’s all about that rank.
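The rank-matching idea in the Scots example can be sketched like this (a deliberately simplified illustration: NESA's real moderation also preserves the internal distribution and handles anomalies, as discussed later in this thread — this only shows the principle):

```python
# Simplified sketch of moderation: each internal rank is matched to the
# correspondingly-ranked exam mark from the same school cohort.
def moderated_assessment_marks(internal_rank_order, exam_marks):
    """internal_rank_order: student names in internal rank order (rank 1 first).
    exam_marks: the same cohort's HSC exam marks, in any order."""
    sorted_exam = sorted(exam_marks, reverse=True)
    return {student: sorted_exam[i] for i, student in enumerate(internal_rank_order)}

# A tiny 3-student cohort for illustration:
ranks = ["Maxx", "Toby", "Thomas"]        # internal ranks 1, 2, 3
exam = [92, 85, 72]                        # the cohort's exam marks
print(moderated_assessment_marks(ranks, exam))
# {'Maxx': 92, 'Toby': 85, 'Thomas': 72}
```

Notice that each student keeps their own exam mark; it is only the assessment mark that gets re-derived from the cohort's exam results.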

Cool story but how do I actually get a better ATAR?
That’s the real question!

There’s no point wasting time by learning how HSC marks are calculated if you can’t use it to supersize your ATAR without a ton of extra work. Also, with everything going on right now (thanks COVID), I've got a feeling scaling will be VERY different this year (and years to come).

I've given you a lot of information for now, and there are still those pesky assessments to study for and complete.

Follow me to stay up to date with my latest posts AND keep an eye out for a new post over the next few days on PART 2 to THIS POST!

Good Luck!!
 

Trebla

Administrator
Administrator
Joined
Feb 16, 2005
Messages
8,384
Gender
Male
HSC
2006
Just to clarify a technicality, scaling has nothing to do with how difficult a subject is. It is actually based on the strength of the subject's cohort in all their other subjects.

In layman's terms, the reason that, say, Physics typically scales better than Biology is that there are usually 'smarter' people (when looking at their performance in other subjects) doing Physics than Biology. It is also possible to have one year where there are more 'smarter' people in Biology than Physics, in which case Biology would scale better than Physics for that year.

In fact, the scaling algorithm makes it possible for the following extreme scenario. Imagine all the high performing students only do Mathematics Standard and the low performing students do Mathematics Advanced. This scenario would cause Mathematics Standard to be scaled better than Mathematics Advanced, despite it being the "easier" subject. Of course, the reality is that more high performing students choose to do Mathematics Advanced than Mathematics Standard.
 

c8

Active Member
Joined
Apr 28, 2018
Messages
216
Gender
Female
HSC
2020
was ranked #48 at Scots economics is going to have an assessment mark of the lowest exam mark: 72.
So whoever is ranked last internally will automatically receive the lowest HSC examination mark, regardless of their performance in the HSC? Or is this only the case for whoever is dead last, then everyone in-between (not including whoever is first) is aligned a mark according to the cohort's performance? Not sure if what I just said makes sense... I'm just very perplexed by the calculation of marks
 

hschelper01

Active Member
Joined
Apr 6, 2020
Messages
168
Gender
Male
HSC
2019
Just to clarify a technicality, scaling has nothing to do with how difficult a subject is. It is actually based on the strength of the subject's cohort in all their other subjects.

In layman's terms the reason that say Physics typically scales better than Biology is because there are usually 'smarter' people (when looking at their performance in other subjects) doing Physics than Biology. It is also possible to have one year where there are more 'smarter' people in Biology than Physics, in which case Biology would scale better than Physics for that year.

In fact, the scaling algorithm makes it possible for the following extreme scenario. Imagine all the high performing students only do Mathematics Standard and the low performing students do Mathematics Advanced. This scenario would cause Mathematics Standard to be scaled better than Mathematics Advanced, despite it being the "easier" subject. Of course, the reality is that more high performing students choose to do Mathematics Advanced than Mathematics Standard.
Oh that's a good point actually ^^
 

Potato Sticks

Member
Joined
Oct 25, 2019
Messages
37
Gender
Undisclosed
HSC
2013
Just to clarify a technicality, scaling has nothing to do with how difficult a subject is. It is actually based on the strength of the subject's cohort in all their other subjects.

In layman's terms the reason that say Physics typically scales better than Biology is because there are usually 'smarter' people (when looking at their performance in other subjects) doing Physics than Biology. It is also possible to have one year where there are more 'smarter' people in Biology than Physics, in which case Biology would scale better than Physics for that year.

In fact, the scaling algorithm makes it possible for the following extreme scenario. Imagine all the high performing students only do Mathematics Standard and the low performing students do Mathematics Advanced. This scenario would cause Mathematics Standard to be scaled better than Mathematics Advanced, despite it being the "easier" subject. Of course, the reality is that more high performing students choose to do Mathematics Advanced than Mathematics Standard.
I don’t think this is quite true, scaling is a combination of the difficulty of the exam and cohort ability. If you put, say, math extension 2 students into math standard and math standard students into math extension 2, math extension 2 is still going to scale better, the scores will just be very low, while the math standard scaling will be poor as usual because everyone will be scoring close to full marks
 

hschelper01

Active Member
Joined
Apr 6, 2020
Messages
168
Gender
Male
HSC
2019
So whoever is ranked last internally will automatically receive the lowest HSC examination mark, regardless of their performance in the HSC? Or is this only the case for whoever is dead last, then everyone in-between (not including whoever is first) is aligned a mark according to the cohort's performance? Not sure if what I just said makes sense...I'm just very perplexed by the calculation of marks
It's all aligned and based on internal ranking.
 

Potato Sticks

Member
Joined
Oct 25, 2019
Messages
37
Gender
Undisclosed
HSC
2013
Just to clarify a technicality, scaling has nothing to do with how difficult a subject is. It is actually based on the strength of the subject's cohort in all their other subjects.

In layman's terms the reason that say Physics typically scales better than Biology is because there are usually 'smarter' people (when looking at their performance in other subjects) doing Physics than Biology. It is also possible to have one year where there are more 'smarter' people in Biology than Physics, in which case Biology would scale better than Physics for that year.

In fact, the scaling algorithm makes it possible for the following extreme scenario. Imagine all the high performing students only do Mathematics Standard and the low performing students do Mathematics Advanced. This scenario would cause Mathematics Standard to be scaled better than Mathematics Advanced, despite it being the "easier" subject. Of course, the reality is that more high performing students choose to do Mathematics Advanced than Mathematics Standard.
Actually, I just realised that the new biology syllabus scaled way better than the old syllabus. This couldn’t possibly be due to biology students randomly being way better
 

quickoats

Well-Known Member
Joined
Oct 26, 2017
Messages
970
Gender
Undisclosed
HSC
2019
In fact, the scaling algorithm makes it possible for the following extreme scenario. Imagine all the high performing students only do Mathematics Standard and the low performing students do Mathematics Advanced. This scenario would cause Mathematics Standard to be scaled better than Mathematics Advanced, despite it being the "easier" subject. Of course, the reality is that more high performing students choose to do Mathematics Advanced than Mathematics Standard.
Not sure if I'm correct, but since all subjects are scaled relative to English performance in standard/advanced (which are reported on the same scale), wouldn't the 'new' maths standard scale even worse? In a hypothetical case, before, the average general maths student got a 70 while getting a 60 in English, whereas the average 2u student got a 70 (raw) while getting an 80 in English (a higher performing cohort on average). This means that 2U would scale further since it was relatively harder for the cohort to achieve higher marks. If the cohorts swap, the new general students would get 98 in maths while getting 80 in English, and the new 2U students would be achieving ~50 in maths with a 60 in English. A higher performing cohort in standard would actually lower the scaling since it was relatively easier to achieve such high marks. This would also appreciate the level of scaling in 2U on the upper end as everyone is clustered on the lower end. Of course this is oversimplifying but I think? it automatically stabilises the scaling.
 

quickoats

Well-Known Member
Joined
Oct 26, 2017
Messages
970
Gender
Undisclosed
HSC
2019
Actually, I just realised that the new biology syllabus scaled way better than the old syllabus. This couldn’t possibly be due to biology students randomly being way better
I think it all comes down to relative performance compared to the compulsory 2 units of English (which are scaled the same). If we assume the 'average' biology student performs consistently in English year to year, the new syllabus may have resulted in marks across the board being lower. This means that relatively, the new biology students of the same consistent strength in English had worse marks in biology - this could mean the test was harder, or it could also indicate that the new 'average' biology student, whilst still decent at English, is just worse at biology.
 

Trebla

Administrator
Administrator
Joined
Feb 16, 2005
Messages
8,384
Gender
Male
HSC
2006
As a general recommendation, I would suggest people have a read of the scaling report from the UAC itself:

It might be complicated to understand the more complex mathematical details, but the general high-level principles they explain are all there. I would caution against taking what is said by secondary sources (such as tutoring websites) as truth, because even though they get like 90% of the explanations correct, there are still some parts which are not consistent with what is laid out in the official UAC or NESA source material. These are usually confusions of correlation and causation.

I don’t think this is quite true, scaling is a combination of the difficulty of the exam and cohort ability. If you put, say, math extension 2 students into math standard and math standard students into math extension 2, math extension 2 is still going to scale better, the scores will just be very low, while the math standard scaling will be poor as usual because everyone will be scoring close to full marks
The scaling algorithm is carried out afresh each year and is only driven by the data in that year. That means there is no pre-determined rule which says certain subjects scale better than others. Also, relative "difficulty" is not only a subjective concept but also something that cannot be properly quantified. If the scaling algorithm really did take "difficulty" into account, then what metric could it possibly use as a direct measurement of that when subjects aren't directly comparable?

On your example, some technicalities to point out:

- Extension subjects are treated slightly differently to 2 unit courses but use the same principle. Mathematics Extension 2 scaling is based on that cohort's performance in Mathematics Extension 1. Suppose the state average of the whole Ext1 cohort was 60%. Now take the average of the Ext2 students that sit within that Ext1 cohort as a subset. If that average was say 70% then Ext2 would scale better than Ext1. However, if that average was say 50% then Ext2 would scale worse than Ext1. There is no pre-determined rule in the scaling algorithm which says Ext2 must scale better than Ext1. Also, notice that at no point are the marks in the Ext2 course itself a factor in that consideration.

- Ignoring that technicality for your hypothetical scenario - if we swapped the Mathematics Standard and Mathematics Extension 2 cohorts, the fact is that the new Mathematics Standard cohort will still be scaled better than the new Mathematics Extension 2 cohort (see my explanation at the bottom for details). The only matter to note is that because the maximum is capped at 100, any scaling uplift that the new Mathematics Standard cohort receives is small, but the final scaled marks would still be far higher than those of the new Mathematics Extension 2 cohort.
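The subset-average comparison in the first point can be sketched numerically (a toy illustration of the principle described, with made-up student names and marks — not the actual UAC computation):

```python
# Sketch of the comparison described above: Ext2's scaling is judged by how the
# Ext2 students performed in Ext1, relative to the whole Ext1 cohort.
def ext2_scales_better_than_ext1(ext1_marks, ext2_students):
    """ext1_marks: dict of student -> Ext1 mark (the whole Ext1 cohort).
    ext2_students: the subset of those students who also sit Ext2."""
    whole_cohort_avg = sum(ext1_marks.values()) / len(ext1_marks)
    subset = [m for s, m in ext1_marks.items() if s in ext2_students]
    subset_avg = sum(subset) / len(subset)
    return subset_avg > whole_cohort_avg

# Whole Ext1 cohort averages 60; the Ext2 subset averages 70 in Ext1:
ext1 = {"ana": 70, "ben": 70, "cam": 50, "dee": 50}
print(ext2_scales_better_than_ext1(ext1, {"ana", "ben"}))  # True
```

Note that, as the post says, the marks in Ext2 itself never enter this comparison.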

Actually, I just realised, that the new biology syllabus scaled way better than the old syllabus. This couldn’t possibly be due to biology students randomly being way better
Where are you getting this from? From the UAC scaling report, the scaled mean in Biology in 2019 was 51.8 and for Physics it was 61.0. Since scaled marks are on a common scale by definition, the average student in Biology received a lower scaled mark than the average student in Physics in 2019.

Not sure if I'm correct, but since all subjects are scaled relative to English performance in standard/advanced (which are reported on the same scale), wouldn't the 'new' maths standard scale even worse? In a hypothetical case, before, the average general maths student got a 70 while getting a 60 in English, whereas the average 2u student got a 70 (raw) while getting an 80 in English (a higher performing cohort on average). This means that 2U would scale further since it was relatively harder for the cohort to achieve higher marks. If the cohorts swap, the new general students would get 98 in maths while getting 80 in English, and the new 2U students would be achieving ~50 in maths with a 60 in English. A higher performing cohort in standard would actually lower the scaling since it was relatively easier to achieve such high marks. This would also appreciate the level of scaling in 2U on the upper end as everyone is clustered on the lower end. Of course this is oversimplifying but I think? it automatically stabilises the scaling.
The scaling of Mathematics Advanced is not decided by the scores of the students in the subject itself. It takes into account the scores of the students in all the other subjects they took outside of Mathematics Advanced.

For illustrative purposes, suppose there are only three courses in the HSC that everyone does - Mathematics Standard, Mathematics Advanced and English.

In your first hypothetical example:
Subject              | Cohort average | English average
Mathematics Standard | 70             | 60
Mathematics Advanced | 70             | 80

The scaling algorithm compares the English averages for Mathematics Standard and Mathematics Advanced. Since the English average is higher for Mathematics Advanced then it scales better than Mathematics Standard.

In your second hypothetical example, where the cohorts swap:
Subject              | Cohort average | English average
Mathematics Standard | 98             | 80
Mathematics Advanced | 50             | 60

The scaling algorithm compares the English averages for Mathematics Standard and Mathematics Advanced. Since the English average is higher for Mathematics Standard then it scales better than Mathematics Advanced.

Now as you can imagine, once you extend this to multiple subjects and the fact that each student does a variety of subjects, this starts to get very complicated.
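The simplified three-subject comparison above can be sketched like this (an illustration of the principle only, using the hypothetical numbers from the two tables):

```python
# Sketch of the simplified comparison above: with English as the common subject,
# the course whose cohort has the higher English average scales better.
def scaling_order(subjects):
    """subjects: dict of course -> (cohort average, English average).
    Returns course names ordered best-scaling first, by English average."""
    return sorted(subjects, key=lambda s: subjects[s][1], reverse=True)

# First hypothetical: the Advanced cohort is stronger in English.
print(scaling_order({"Standard": (70, 60), "Advanced": (70, 80)}))
# ['Advanced', 'Standard']

# Second hypothetical (cohorts swapped): Standard now scales better,
# even though its own cohort average (98) is much higher.
print(scaling_order({"Standard": (98, 80), "Advanced": (50, 60)}))
# ['Standard', 'Advanced']
```

The course's own cohort average never enters the comparison, which is exactly the point being made.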
 

Potato Sticks

Member
Joined
Oct 25, 2019
Messages
37
Gender
Undisclosed
HSC
2013
Ah I think I see where the confusion came from, we define “worse scaling” to mean different things. By good scaling I meant a larger increase at comparable unscaled marks, so even if the Math Extension 2 cohort had a lower mean MX1 score than the MX1 cohort, as long as their MX2 marks are low, the “uplift” caused by the scaling will be more than MX1's, even if their final marks are not. But yes, in that case I do agree with everything you’ve said.
 

ultra908

Active Member
Joined
May 11, 2019
Messages
151
Gender
Male
HSC
2020
So whoever is ranked last internally will automatically receive the lowest HSC examination mark, regardless of their performance in the HSC? Or is this only the case for whoever is dead last, then everyone in-between (not including whoever is first) is aligned a mark according to the cohorts performance? Not sure if what I just said makes sense...I'm just very perplexed by the calculation of marks
You always get your own examination mark- however you performed in the HSC exam is the exam mark you get. However, if you rank last in the internal assessment, your internal assessment mark will be moderated to (within reason to keep the distribution the same) the lowest exam mark of your cohort.

Take this example. Suppose you rank last in your cohort and you get a 65 internal mark from your school. Then you ace the HSC, and get a 90 exam mark. You get your 90 exam mark.
Now NESA doesn't really know what a 65 from your school means. But the worst performing exam mark from your school was 60. Now NESA has a point of comparison, and your internal mark will be moderated to about 60. So your HSC marks will be internals (~60) and externals (90).
Now suppose the worst performing student for some reason got an exam mark of 20, and everyone else performed the same. From my understanding, your internal mark won't suddenly be adjusted to 20; NESA will account for this anomaly.
 

Trebla

Administrator
Administrator
Joined
Feb 16, 2005
Messages
8,384
Gender
Male
HSC
2006
Ah I think I see where the confusion came from, we define “worse scaling” to mean different things. By good scaling I meant a larger increase at comparable unscaled marks, so even if the Math Extension 2 cohort had a lower mean MX1 score than the MX1 cohort, as long as their MX2 marks are low, the “uplift” caused by the scaling will be more than MX1's, even if their final marks are not. But yes, in that case I do agree with everything you’ve said.
The size of the "uplift" on the raw marks depends on both the target scaled mean and where the raw marks sit for that subject.

It's better to think of scaling as a sort of "target seeking" algorithm. If for some reason the average of the Ext2 students' performance in Ext1 was lower than the average of the entire Ext1 cohort, then the scaled mean of Ext2 must be lower than the scaled mean of Ext1. Suppose these target scaled means for Ext1 and Ext2 are calculated to be 85% and 70% respectively. It doesn't matter how high or low the average raw marks in Ext2 itself are; it must hit that 70% scaled mean.

For example, if the average raw mark in Ext2 was say 50% and the average raw mark in Ext1 was say 60%, then the "uplift" for Ext2 at the mean (50% =>70%) is actually smaller than the "uplift" for Ext1 at the mean (60% => 85%).

Generally speaking, the size of the "uplift" on the raw marks is technically not quite the right way to think about "how much" a subject scales. The reason is that the marks are first subjected to "standardisation" before any scaling actually happens. This standardisation is simply applying a linear function to the data so that all subjects have a common mean of 50% (and a common standard deviation of 24% - for those that do statistics, this is a similar process to computing a Z score). All the relativities are still preserved after this process, but all subjects start with a common mean of 50%. What scaling does is then adjust the mean upwards/downwards from its initial 50% depending on how strong the cohort is in their other subjects. This means that the size of the "uplift" on the raw marks is actually a combination of the initial standardisation step and the scaling itself.
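That standardisation step is just a linear transform, which can be sketched in a few lines (taking the quoted figures of mean 50 and standard deviation 24 at face value — an illustration of the idea, not the actual UAC implementation):

```python
# Sketch of standardisation: linearly map a subject's raw marks onto a common
# scale (here mean 50, standard deviation 24), preserving all relativities.
from statistics import mean, pstdev

def standardise(raw_marks, target_mean=50.0, target_sd=24.0):
    mu, sigma = mean(raw_marks), pstdev(raw_marks)
    return [target_mean + target_sd * (m - mu) / sigma for m in raw_marks]

raw = [40, 55, 70, 85]
std = standardise(raw)
print(round(mean(std), 1))  # 50.0 - every subject ends up with this common mean
```

Only after this common starting point does scaling shift the mean up or down according to the cohort's strength in their other subjects.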

The most appropriate way to think about "how much" a subject scales better than another is to simply compare their scaled means. Generally speaking, a scaled mean above 50% suggests above the state average and a scaled mean below 50% suggests below the state average. If subject A has a higher scaled mean than subject B, then this suggests that the average student in subject A is ranked higher than the average student in subject B.
 
