feat(statistics): store ranked_score & total_score under classic scoring mode (#68)

* Initial plan

* feat(calculator): add classic score simulator and scoring mode support

- Add ScoringMode enum with STANDARDISED and CLASSIC modes (see the sketch after this list)
- Add scoring_mode configuration to game settings
- Implement GetDisplayScore function in calculator.py
- Add get_display_score method to Score model
- Update score statistics to use display scores based on scoring mode
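For orientation, a minimal sketch of the pieces described above. Only the STANDARDISED/CLASSIC member names and the app.models.scoring_mode module are confirmed by the diff below; the member values, the base enum type, and the settings.scoring_mode field name are assumptions, not the committed code.

# Hypothetical sketch of the new scoring-mode plumbing (names partly assumed).
from enum import IntEnum

class ScoringMode(IntEnum):
    STANDARDISED = 0  # lazer-style standardised total score (capped at MAX_SCORE)
    CLASSIC = 1       # stable-style "classic" score used for display/statistics

# Assumed game-settings wiring: a scoring_mode field read wherever
# ranked_score / total_score statistics are updated, e.g.
# settings.scoring_mode = ScoringMode.CLASSIC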

Co-authored-by: MingxuanGame <68982190+MingxuanGame@users.noreply.github.com>

* fix(calculator): apply scoring mode to TotalScoreBestScore delete method

- Update delete method to use display score for consistency
- Ensures all UserStatistics modifications use configured scoring mode

Co-authored-by: MingxuanGame <68982190+MingxuanGame@users.noreply.github.com>

* refactor(calculator): address code review feedback

- Move MAX_SCORE constant to app/const.py
- Implement is_basic() as method in HitResult enum (see the sketch after this list)
- Move imports to top of file in Score model
- Revert TotalScoreBestScore storage to use standardised score
- Apply display score calculation in tools/recalculate.py
- Keep display score usage in UserStatistics modifications
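As a rough illustration of the is_basic() bullet above: in osu!, "basic" hit results are the per-object judgements (miss/meh/ok/good/great/perfect), while tick and bonus results are excluded. A minimal sketch, assuming the enum members follow osu!'s API result names; the committed member set may differ.

# Hypothetical sketch of HitResult.is_basic() (member names assumed).
from enum import Enum

class HitResult(str, Enum):
    MISS = "miss"
    MEH = "meh"
    OK = "ok"
    GOOD = "good"
    GREAT = "great"
    PERFECT = "perfect"
    LARGE_TICK_HIT = "large_tick_hit"  # example of a non-basic (tick) result
    SMALL_BONUS = "small_bonus"        # example of a non-basic (bonus) result

    def is_basic(self) -> bool:
        # Basic results are the per-object judgements; ticks and bonus results
        # do not count towards the basic-judgement total used for classic score.
        return self in {
            HitResult.MISS,
            HitResult.MEH,
            HitResult.OK,
            HitResult.GOOD,
            HitResult.GREAT,
            HitResult.PERFECT,
        }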

Co-authored-by: MingxuanGame <68982190+MingxuanGame@users.noreply.github.com>

* chore(linter): auto fix by pre-commit hooks

* Don't use forward-ref for `ScoringMode`
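That is, the annotation now references the imported enum directly instead of a string forward reference; a schematic illustration (signature abridged, not the exact committed code):

from app.models.scoring_mode import ScoringMode

# Before: mode: "ScoringMode"  (string forward reference, resolved later)
# After:  mode: ScoringMode    (direct reference to the imported enum)
def get_display_score(ruleset_id: int, total_score: int, mode: ScoringMode) -> int:
    ...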

* chore(linter): auto fix by pre-commit hooks

* fix(calculator): update HitResult usage in get_display_score and adjust ruleset value in PerformanceServerPerformanceCalculator

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: MingxuanGame <MingxuanGame@outlook.com>
Authored by Copilot on 2025-11-08 20:56:29 +08:00, committed by GitHub
parent ef3a900de0, commit d9d26d0523
9 changed files with 144 additions and 9 deletions


@@ -6,8 +6,10 @@ from typing import TYPE_CHECKING
 from app.calculators.performance import PerformanceCalculator
 from app.config import settings
+from app.const import MAX_SCORE
 from app.log import log
-from app.models.score import GameMode
+from app.models.score import GameMode, HitResult, ScoreStatistics
+from app.models.scoring_mode import ScoringMode
 from osupyparser import HitObject, OsuFile
 from osupyparser.osu.objects import Slider
@@ -52,6 +54,57 @@ def clamp[T: int | float](n: T, min_value: T, max_value: T) -> T:
     return n


+def get_display_score(ruleset_id: int, total_score: int, mode: ScoringMode, maximum_statistics: ScoreStatistics) -> int:
+    """
+    Calculate the display score based on the scoring mode.
+
+    Based on: https://github.com/ppy/osu/blob/master/osu.Game/Scoring/Legacy/ScoreInfoExtensions.cs
+
+    Args:
+        ruleset_id: The ruleset ID (0=osu!, 1=taiko, 2=catch, 3=mania)
+        total_score: The standardised total score
+        mode: The scoring mode (standardised or classic)
+        maximum_statistics: Dictionary of maximum statistics for the score
+
+    Returns:
+        The display score in the requested scoring mode
+    """
+    if mode == ScoringMode.STANDARDISED:
+        return total_score
+
+    # Calculate max basic judgements
+    max_basic_judgements = sum(
+        count for hit_result, count in maximum_statistics.items() if HitResult(hit_result).is_basic()
+    )
+    return _convert_standardised_to_classic(ruleset_id, total_score, max_basic_judgements)
+
+
+def _convert_standardised_to_classic(ruleset_id: int, standardised_total_score: int, object_count: int) -> int:
+    """
+    Convert a standardised score to classic score.
+
+    The coefficients were determined by a least-squares fit to minimise relative error
+    of maximum possible base score across all beatmaps.
+
+    Args:
+        ruleset_id: The ruleset ID (0=osu!, 1=taiko, 2=catch, 3=mania)
+        standardised_total_score: The standardised total score
+        object_count: The number of basic hit objects
+
+    Returns:
+        The classic score
+    """
+    if ruleset_id == 0:  # osu!
+        return round((object_count**2 * 32.57 + 100000) * standardised_total_score / MAX_SCORE)
+    elif ruleset_id == 1:  # taiko
+        return round((object_count * 1109 + 100000) * standardised_total_score / MAX_SCORE)
+    elif ruleset_id == 2:  # catch
+        return round((standardised_total_score / MAX_SCORE * object_count) ** 2 * 21.62 + standardised_total_score / 10)
+    else:  # mania (ruleset_id == 3) or default
+        return standardised_total_score
+
+
 def calculate_pp_for_no_calculator(score: "Score", star_rating: float) -> float:
     # TODO: Improve this algorithm
     # https://www.desmos.com/calculator/i2aa7qm3o6
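To make the classic conversion above concrete, an illustrative call (values and statistics-key format are assumptions, and MAX_SCORE is assumed to be the standardised ceiling of 1,000,000): a full osu! standardised score over 1,000 basic objects maps to round((1000**2 * 32.57 + 100000) * 1_000_000 / 1_000_000) = 32,670,000 classic score.

# Illustrative usage, not part of the commit; keys in maximum_statistics are
# assumed to be HitResult values mapped to counts.
classic = get_display_score(
    ruleset_id=0,                        # osu!
    total_score=1_000_000,               # full standardised score
    mode=ScoringMode.CLASSIC,
    maximum_statistics={"great": 1000},  # 1,000 basic judgements
)
# classic == 32_670_000 under the MAX_SCORE == 1_000_000 assumption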