I'd like to talk about various issues that go beyond these changes and make some suggestions for potential improvement. I believe that what has been discussed (and partially implemented) so far is mostly just a change of semantics; ultimately this isn't enough, and what is needed is a complete overhaul of the BN application process. The points presented in this post are based on long-term observations by myself and several other people, and I'll try to summarize them below.
1) Current issues
1.1) The "3 mod showcase system" is generally not ideal because even just finding 3 maps that meet the desired criteria can be very difficult and time consuming. A lot of maps are incomplete or very low quality which already makes them unfitting for the application. Some others are very high quality and therefore don't offer a lot to work with. Those that fall somewhere in between often have lots of mods already which again reduces the amount of content that can be used to demonstrate one's modding skills. Usually the maps are also supposed to contain specific issues in order to cover all possible aspects of mapping as expected.
1.2) The expectations/requirements are also problematic because they lead to "artificial" mods that often don't reflect actual mods done by BNs. Applicants don't just mod any map or make any kind of mod; over time a specific formula has been developed that is supposed to meet these requirements. However, this doesn't properly measure someone's modding abilities, but rather their ability to figure out what exactly evaluators are looking for and adapt accordingly. If you took random mods made by BNs and used them in a BN application, the result would most likely be negative, since no such expectations apply to them while they are BNs, so expecting them from applicants doesn't make much sense and is not realistic.
1.3) This way of modding for BN applications can have other drawbacks as well, such as copying certain mods/suggestions from other people without understanding them and using them in a different context where they might not even apply. It has led to modding becoming quite homogenized, because people treat it like there is one right way to mod and want to learn that way in order to become a BN, so they often blindly follow other modders by pointing out the same types of issues, using the same reasoning, wording, etc. In reality there are many different "modding styles" and none is objectively better than another. Moreover, by trying to check all the boxes, modders might focus too much on finding potential issues and subconsciously mention things that are fine or exaggerate minor issues.
1.4) The evaluation criteria are unclear/vague and there are hidden expectations, so modders essentially have to guess what they should and shouldn't do. While the attempt to improve this aspect as discussed in this thread is a good start, I still have doubts whether it really works in practice. This might also have to do with the way evaluations are done, though, which brings me to my next point.
1.5) Due to the subjective nature of mapping and modding, evaluations can differ widely from one NAT member to another. As such, there is a certain RNG component at play, which can make evaluations feel unfair. This also means that if an applicant has different views on map quality or on what is and isn't an issue in a map compared to an evaluator, it could impact their result negatively. While it might be impossible to avoid making judgements based on personal preferences, it should be reduced to a minimum.
1.6) Unfortunately evaluations are also prone to bias in multiple ways. Firstly, there might be a subconscious bias towards negative aspects, since the task consists of checking for mistakes the applicant might have made, similar to what I mentioned above regarding modders focusing too much on finding potential issues in a map. Mistakes and shortcomings seem to carry significantly more weight than things that were done well, so even if most aspects are positive, the application can still result in a failure. Another thing to note is that, according to the evaluation process, if the majority of evaluators vote "fail", the applicant is automatically denied. However, the same is not true for a majority of "pass" votes, again indicating a tendency towards negativity.
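To make the asymmetry concrete, here's a minimal sketch in Python of the rule as I understand it (the function and its names are purely illustrative, not the actual evaluation logic):

```python
# Illustrative sketch only: a strict majority of "fail" votes denies the
# application automatically, but there is no mirrored auto-pass rule.
def application_outcome(votes: list[str]) -> str:
    fails = votes.count("fail")
    if fails > len(votes) / 2:
        return "denied"  # majority "fail" -> automatic denial
    # Even a majority of "pass" votes does not automatically pass;
    # the outcome still depends on further discussion.
    return "undecided"

print(application_outcome(["fail", "fail", "pass"]))  # denied
print(application_outcome(["pass", "pass", "fail"]))  # undecided
```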
1.7) The other form of bias consists of favoring people someone likes or is friends with, and conversely opposing people they dislike. This is exacerbated by the fact that evaluators are not always exclusively randomized; they can also assign themselves to an application in order to substitute someone or act as an additional evaluator, giving them the possibility to skew the result.
1.8) Generally the aforementioned inconsistencies in evaluations are demonstrated by occurrences like former BNs or even former NAT failing applications (or for example, Elite Nominators/former NAT being kicked/probationed) which understandably raise some questions. It seems unlikely that competent modders forget or unlearn their skills in a few months or 1-2 years, so either problems have been overlooked previously and they were seen as better than they actually are, or the assessments don't do a good job at determining someone's capabilities. Previous BN experience should play a bigger role when assessing a candidate.
1.9) I find it questionable that the behavior of future and existing BNs is assessed by the NAT because they are not specifically educated/trained on how to do this and might not always be able to make fair calls about what's right or wrong. As some have had incidents of misconduct themselves, they might not be the best candidates to judge how others act, and there have been examples of debatable decisions taken in this regard.
1.10) All of this ties into the fact that there are little to no checks or consequences for subpar or unfair evaluations. This is of course a result of the NAT's self-regulation, but I think it also partly has to do with the fact that applications and their results are not visible to the public, so they are not subject to community opinions the way qualified maps are, for example, which can be a form of quality control. Apparently it is now possible to allow applications to be viewed publicly, but I'm not sure where other users can view them (was this explained anywhere?). The inability to appeal decisions can elicit feelings of powerlessness in applicants as well.
1.11) Whether the changes to how feedback is delivered are beneficial remains to be seen. Either way, the problematic aspect is not necessarily the feedback's format, but more importantly its content. Issues are often explained poorly or insufficiently, making them hard to understand for the person reading the feedback. The provided reasoning is sometimes overly subjective and not supported by facts or evidence, and it generally lacks helpful information on how to improve. The different and potentially contradicting answers from evaluators when asking further questions only add to the confusion, but this should hopefully be mitigated by the new unified communication method.
2) Stats
Next, I want to present and discuss some stats on the pass rate of BN applications. The data was taken on February 15th 2024 and is based on all-time evaluations from all current NAT members. I can share the complete spreadsheet if someone is interested.
2.1) The first thing that stands out is the large discrepancy between the different game modes:
osu! (standard): out of 526 total evaluations, 169 passed = 32.13% pass rate
osu!taiko: out of 164 total evaluations, 68 passed = 41.46% pass rate
osu!catch: out of 190 total evaluations, 135 passed = 71.05% pass rate
osu!mania: out of 272 total evaluations, 155 passed = 56.99% pass rate
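As a sanity check, these percentages can be reproduced from the raw counts above with a few lines of Python (assuming the counts themselves are accurate):

```python
# Recompute the per-mode pass rates from the raw counts listed above.
evals = {
    "osu! (standard)": (526, 169),
    "osu!taiko": (164, 68),
    "osu!catch": (190, 135),
    "osu!mania": (272, 155),
}

for mode, (total, passed) in evals.items():
    print(f"{mode}: {passed} of {total} passed = {passed / total:.2%}")
```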
A possible reason could be the size difference between, for example, standard and catch, but the number of successful applications being less than half in the former is still a huge gap. And considering mania's significant growth in recent times, it nearly reached the same number of BNs as standard (even surpassing it briefly), yet the percentages seen above still differ notably, so size is likely not the only factor (if it is one at all). Taiko is also on the lower side here; I'm not sure if it's related to the fact that there are several newer NAT members in this mode, but it stuck out to me and also explains why there are not that many evals in total. So the question is: Is the skill level of modders across game modes really that different, is the learning curve steeper or flatter depending on the mode, or does each mode simply approach evaluations differently (stricter or more lenient)?
2.2) The other interesting aspect I noticed is how much the pass rates vary between individual members of each mode. The most notable one is osu! standard, where the highest pass rate is 45.83% and the lowest only 18.00%, and these are not outliers either, as there are similar values for other people. The only other mode where the numbers differ significantly across evaluators is taiko (24.32%-57.14%), though both the highest and the lowest value are outliers there. Both mania (48.28%-61.76%) and especially catch (70.27%-75.00%) are closer together, and (coincidentally or not) those are exactly the modes with the highest pass rates overall.
3) Suggestions/ideas
3.1) The core idea is similar to what has been posted by Shii here, but with some modifications and additional details. Instead of submitting 3 specific mods, the applicant's recent modding history is looked at in general. However, the modded maps and their respective suggestions wouldn't be analyzed in great detail. The check would just cover whether the mods make sense, are explained in a comprehensible way, help to improve the map somehow, and whether nothing major was missed. Smaller mistakes and more sophisticated modding abilities would not be relevant, and no special criteria would have to be met. This would filter out applicants who are clearly not experienced and skilled enough to mod maps on a BN level.
The advantage is that the process would become less stressful, time-consuming and difficult for both parties involved (at least to some degree). This would also put less focus on the application itself and more on the applicant, resulting in more accurate and consistent outcomes.
3.2) If no major problems are found, the applicant becomes a trial/pseudo BN (a new group wouldn't be necessary) where they don't actually have any BN abilities like nominating or disqualifying maps, but they can place hypothetical nominations after completing their mod. This would either work by pressing an actual button on the map's discussion page if the necessary dev support exists, or otherwise by saving the map as an .osz file and submitting it as a nomination to the BN website. After a certain period of time or a certain number of nominations, they are evaluated and, if found competent enough, added as a probationary BN. Otherwise they are removed and given the standard cooldown.
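To make the mechanics more tangible, here's a rough Python sketch of how trial nominations might be tracked (every name and threshold here is a hypothetical assumption, since no such system exists):

```python
# Hypothetical data model for the trial/pseudo BN idea; field names and
# the evaluation threshold are illustrative assumptions only.
from dataclasses import dataclass, field

@dataclass
class HypotheticalNomination:
    beatmapset_id: int
    submitted_as_osz: bool  # True if submitted as an .osz file instead
                            # of via a discussion-page button

@dataclass
class TrialBN:
    username: str
    nominations: list[HypotheticalNomination] = field(default_factory=list)

    def ready_for_evaluation(self, required: int = 10) -> bool:
        # Evaluate once a certain number of hypothetical nominations has
        # been reached; the exact number would need to be decided.
        return len(self.nominations) >= required
```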
The idea behind this is that practical experience is often the best way to actually learn how to do something. A good analogy is applying for a job: usually you are not expected to know everything beforehand; there is a lot you simply master while doing the job. Obviously someone without experience is not ready to take on major responsibilities yet, but they can be guided to that point gradually.
3.3) The option to appeal BN app results should be added. The way it would work is that the applicant could file an appeal presenting reasons why they think the assessment was faulty. As long as the provided arguments are not completely unreasonable, it would be evaluated by previously uninvolved NAT members. If they come to the same conclusion as the original evaluators, an explanation would be given to the applicant and they could not appeal again. Otherwise, if the appeal is considered valid, it would be discussed by the entire NAT of the relevant game mode until a consensus is reached, or, if no agreement can be found, a vote is held to determine the final outcome by simple majority.
This feature would address some of the issues mentioned above, such as inconsistency, subjectivity and bias. If I remember correctly, it used to be a thing for existing BN evaluations (though I don't know how exactly it worked) and I think there were some cases where appeals were granted, but don't quote me on that.
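For clarity, the decision flow I have in mind could be summarized like this (a sketch under my own assumptions, not an existing process; all names are made up):

```python
# Illustrative sketch of the proposed appeal flow described above.
from typing import Optional

def resolve_appeal(uninvolved_agree: bool,
                   nat_consensus: Optional[str],
                   nat_votes: list[bool]) -> str:
    if uninvolved_agree:
        # Uninvolved NAT members reach the same conclusion: the applicant
        # gets an explanation and cannot appeal again.
        return "original result upheld, no further appeal"
    if nat_consensus is not None:
        # The mode's entire NAT discussed the appeal and reached consensus.
        return nat_consensus
    # No consensus: a simple majority vote decides the final outcome.
    in_favor = sum(nat_votes)
    return "appeal granted" if in_favor > len(nat_votes) / 2 else "appeal denied"
```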
4) Potential issues
Of course there are also some potential issues and disadvantages to the ideas proposed above, so this is an attempt to identify, address and solve them.
4.1) As mentioned by RandomeLoL here, there would probably be quite a lot of "pseudo BNs" being added, which would increase the workload of the NAT considerably. A few possibilities have already been named, such as increasing the minimum kudosu requirement to apply (to about 500 perhaps), which would also ensure the modder has at least some experience. Having a hard cap on applicants and/or BN additions at a time has also been brought up. Another way to alleviate the problem could be enlisting the help of BNs to do evaluations, which is already in place now but could be expanded further. It might also be only a temporary issue, because the number of total BNs would likely increase over time, which in turn means a larger pool of evaluators. This of course only works if corresponding training programs for BNs are run consistently and successfully.
4.2) Many people would probably expect quality standards to drop a lot, but this is not necessarily the case. First and foremost, the fact that these new BNs would not be able to actually nominate maps is already a major safeguard. If deemed necessary, this trial period could even be extended beyond 1-2 months. Another important preventive measure that has been neglected lately is quality assurance. If more people were actually checking, playing and reporting qualified maps, mistakes and quality issues could be reduced significantly, and given the right tools and incentives, there are definitely enough people willing to do these things. The short-lived Qualified Inspector project was honestly a great idea, and although a new usergroup won't be added, that should not be a reason to abandon the topic entirely. I think adding users who take on this role to the BNs would be an acceptable alternative, which would also grant them rewards like tenure badges. Additionally, I remember there being a conversation about more incentives for players (as in non-mappers/modders) to play qualified maps in order to detect problems more easily. I'm not sure what happened to that idea, but even just making plays set during qualified carry over to ranked (as long as the map is not disqualified) would surely increase the play count.
4.3) The trial/pseudo BN system could be demotivating because the people going through it don't actually get to nominate maps, which is usually the interesting and exciting part about becoming a BN. On the other hand though, I think this is still less demotivating than doing mods for BN applications, failing and starting over again, since that can also feel like a waste of time. A trial phase would at least give people the feeling of having accomplished something and making progress towards their goal.
4.4) Another valid concern is that for modders who are already capable of performing actual BN duties, this would just hold them back unnecessarily. A solution to this could be letting people skip to regular probation immediately if they are considered good enough.
4.5) No longer picking and submitting specific mods could certainly be an issue if someone has recently made a mod that is incomplete or low-effort, as it would reflect badly on them. As a compromise, there could be an option to exclude certain mods from the application. Similarly, if someone thinks they did particularly well on a certain map, they could optionally mention that as well.
--------------------------------------------------------------------------------------------------------------------
On a side note, I also recommend checking out this thread; it's quite interesting because it contains some questions and concerns related to this topic:
https://www.reddit.com/r/osugame/comments/14nv46a/we_are_the_nomination_assessment_team_ask_us/