Thanks! I think in the future we should have a class template for these problems that would make it easy to check each contestant's code programmatically. I'm guessing that this year people didn't follow any uniform API pattern for solving each problem, grabbing the inputs, etc. It would be nice if we all followed a standard so that no one has to check these by hand.

It feels weird to me that the rules would be changed to disallow Python for this year, as that would actively cost Robert Cemper his 3rd place prize, but of course that's up to the organizers.

As for your second point, I agree that having the IRIS wrapper around pure Python code would be a bit pointless, and it would also be strange for InterSystems to sponsor such an event. I think the main selling point of doing these problems in CoS is the flexibility of multidimensional arrays and how well that data structure lends itself to challenging and intricate problems.

After the holidays, could we get an update on the use of embedded Python for these contests?

I always got the feeling from InterSystems events like this that we're trying to push for greater product visibility among those outside the InterSystems/CoS/Mumps umbrella, and that these contests welcome new people to these technologies. Please correct me if this assumption is wrong; if so, the rest of this post is just nonsensical rambling.

I don't fault anyone who used it during the contest; for example, I see from scrolling up that @FabianHaupt asked for clarifications on additional rules for the code in this thread, which I guess could have been hinting at this feature (and which didn't receive any public follow-up). Still, it feels strange to me that this was allowed without any announcement being made about it.

From my perspective, the first rule, which says "...present the code in InterSystems ObjectScript in UDL form...", would automatically make most new contestants write in CoS only, when it's more likely (from a probabilistic perspective) that they have more experience with Python and would have done much better using it. Obviously, embedded Python in IRIS isn't a state secret, but I still feel like new users are at a disadvantage for not knowing about it.

I wrote a script to calculate what the scoreboard looks like if we only count people using CoS (using @Robert Cemper's list). That is, I used the provided JSON data to build my own sorted lists of solve times for part 1 and part 2 of each question, excluding users not present in Robert's list. Scores are then recalculated based on each user's position in the sorted lists and summed up (which should match how AoC calculates them).
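In case it helps, here's a rough sketch of the recalculation I described. I'm assuming the structure of AoC's private-leaderboard JSON (members keyed by id, with `completion_day_level` holding `get_star_ts` timestamps); the `cos_users` set stands in for Robert's list, and the function name is just illustrative:

```python
def recalculate(leaderboard, cos_users):
    """Recompute AoC local scores counting only the given CoS users."""
    # Keep only members whose name appears in the CoS list.
    members = {
        mid: m for mid, m in leaderboard["members"].items()
        if m["name"] in cos_users
    }
    n = len(members)
    scores = {mid: 0 for mid in members}
    days = {d for m in members.values() for d in m["completion_day_level"]}
    for day in days:
        for part in ("1", "2"):
            # Sort the remaining solvers by solve timestamp for this day/part.
            solvers = sorted(
                (mid for mid, m in members.items()
                 if part in m["completion_day_level"].get(day, {})),
                key=lambda mid: members[mid]
                    ["completion_day_level"][day][part]["get_star_ts"],
            )
            # AoC local scoring: first of n members gets n points,
            # second gets n-1, and so on down the sorted list.
            for rank, mid in enumerate(solvers):
                scores[mid] += n - rank
    return scores
```

Filtering before ranking is what shifts everyone up a position on days where a non-CoS user finished ahead of them.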

Ah, I am using the same JSON inputs as you (freshly downloaded a few seconds before the script was run). The thing with only sorting the scores from the JSON file is that people who didn't solve in CoS shouldn't really affect the "actual standings" for the contest. That's why the scores are so different. For example, suppose the leaderboard consists only of the three users "cos_coder_1", "cos_coder_2", and "non_cos_coder". My script effectively removes the effect that "non_cos_coder" has on the leaderboard (e.g. on days where they got first, the other two users effectively get first and second).

The overall decrease in scores is because this means we consider fewer participants as "active".