The primary aim of this study was to evaluate the reliability of the University's Master's-level (M-level) generic assessment criteria when used by lecturers from different disciplines. A further aim was to evaluate whether subject-specific knowledge was essential to marking these dissertations. Four senior lecturers from diverse disciplines participated in the study. The University of Teesside's generic M-level assessment criteria were used, formatted into a grid. The assessment criteria related to the learning outcomes, the depth of understanding, the complexity of analysis and synthesis, and the structure and academic presentation of the work. As well as a quantitative mark, a qualitative statement giving the reason behind each judgement was required. Each lecturer provided a dissertation that had previously been marked, and all participants then marked each of the four dissertations using the M-level grid and comments sheet. The study found very good inter-rater reliability: for any one dissertation, the variation in marks from the original mark was no more than 6% on average. The study also found that, in terms of the reliability of marks, subject-specific knowledge was not essential to marking when generic assessment criteria were used. The authors acknowledge the exploratory nature of these results and hope that other lecturers will join in testing the robustness of generic assessment criteria across disciplines.