A first set of results was issued at around 10am. Of the 796 candidates sitting, 591 passed and 205 failed.
For question 20.2 the Examiner's report states:
“The statement 20.2 indicates that the material of D2 should replace the material of D1. This is not a valid argument within the framework of the problem solution approach, since D2 and not D1 is the closest prior art document. Hence, the answer to 20.2 is “False”. The formulation of 20.2 was however unnecessarily complex. Just by using the expression “D2 could be replaced by” instead of “D2 could replace” the solution would become “True”, since there is no teaching that solid wood could be replaced by the material of D1. For this reason, it is exceptionally decided to award marks for the answer “True” as well.”
Therefore marks were awarded to candidates for answering either true or false.
However, it appears that those candidates who answered false to this question actually had their marks double counted. At around 3pm a revised set of results was published, with around half the candidates scoring 2 points fewer compared with the results issued in the morning! The revised results meant that 575 candidates passed and 221 candidates failed.
At around 4:30pm, the results were revised back to the original set published in the morning. There are 16 candidates who don’t know if they have passed or failed.
This Kat can only imagine how distressing it must be for a candidate to be bounced backwards and forwards between pass and fail. Although he understands that this informal notification of results is not the binding outcome (the results come with the disclaimer "Please note that we cannot accept liability for the information given. Only results as notified to you in your result letter are binding."), surely every care needs to be taken to ensure that the results are correctly notified even at this stage.
Moreover, the fact that an ambiguity of this magnitude has arisen at this stage, and the fact that Delta Patents seem to have disagreed with the official answer to a number of further questions, together suggest that the formulation of the multiple-choice questions has perhaps become too complex.
The IPKat wonders whether any readers have further news or insight into this troubling situation.
I suppose that the EQE has no ISO 9000 certification and hence its quality is not assured.
The situation is not new; it happened last year with Question 10. There it took the Board a long time to decide in favour of the candidates. Only after the Board of Appeal decided in favour of candidates was the examiners' report amended, and candidates who had filed an appeal, as well as those who had suffered the same problem but not appealed, saw their answer accepted as correct.
Probably the Board did not want to suffer the same blow from the BoA again, which is why it acted so rapidly. But at what price? It seems that in 2014 a last-minute amendment to the paper caused the problem.
It will be interesting to see what the reason turns out to be this year...
In general it can be said that the questions in the pre-exam are too long and rather complex. This heavily disadvantages candidates whose mother tongue is not an official language, as there are often subtleties in the way a question is drafted which, depending on the candidate's understanding, lead them to the wrong answer even though they know how to deal with the question correctly.
When the pre-exam is guinea-pigged, they should not take a native speaker of one of the languages or an examiner from the EPO, but a successful candidate from, for example, an Eastern European country or from Southern Europe.
Mr Bean says:
These papers are prepared by a group of people, many of whom wouldn't even be able to pass the examination themselves. They construct questions often based on improbable situations which never occur in reality. For these questions they prepare the allegedly correct answers, which in some cases are changed during the correction of the paper in order to save the face of the committee. All this at the expense of candidates who are faced with enormous uncertainties and arbitrary decisions.
Mr Bean: Just to set the record straight on the ability of the paper setters to pass the exam. The examination committees are made up of qualified EPAs and EPO examiners along with some members of the Legal Division. All have passed the EQE themselves (no grandfathers) and I understand that to sit on e.g. the Committee setting the legal paper D, one must have passed paper D without compensation at the time that one sat the EQE.
Buddha comments:
No wonder that such mistakes happen, knowing the level of the present Chair of the Examination Board (a Frenchman) and of the Principal Director responsible for the Secretariat (a Frenchman friend of Battistelli). What a combination!
Not to mention the "Patent Academy" about which there are some interesting revelations here:
http://www.dziv.hr/en/news/mrs.-ljiljana-kuterovac,-director-general-of-the-sipo-appointed-member-of-the-supervisory-board-of-the-epo-academy,151.html
and here:
http://techrights.org/2015/03/17/rikard-frgacic-response/
"...and I understand that to sit on e.g. the Committee setting the legal paper D, one must have passed paper D without compensation at the time that one sat the EQE."
Or perhaps be a friend of the EPO President?
Mr Brophy,
Good, because in the past I'm pretty certain that was not necessarily the case for the EPO examiners, who were rarely EQE-qualified (it's only in relatively recent years that a significant number of examiners have sat the exam, after the experience criterion was changed - mid 90s? - to allow them to sit the exam on the basis of their examining experience being recognised).
Mr. Brophy and Mr. Bean,
I lean towards Mr. Bean's view in critical part because Mr. Brophy is not looking at how the current exams are put together piece by piece and is in a very real sense comparing apples and oranges.
It is simply not the case that any particular person who passed some previous version of the test can be ASSUMED to be able to pass today's version. Instead, and directly to the point, it is today's version, built piece by piece, that is DIFFERENT from yesterday's version - and in particular, it is that level of difference that is the focal point.
That being said, the fact of the matter is that practitioners only gain access to practice at the hands of those who went before, and who have multiple - and competing - interests in setting whatever level "the bar" is set at.
Of course the altruistic drive is that "the bar" is set high in order to protect the dignity and public esteem of the group as a whole. Too low a bar just invites scoundrels and poor practice, and this is one manner of self-policing for "higher quality." This type of policing - it should be noted - is ENTIRELY divorced from what may have been considered acceptable "yesterday," so the fact here that someone passed "yesterday's" bar has no relation to the ability of that same person to pass today's "bar."
One not-so-altruistic (but let's not kid ourselves as to its presence) driver is the self-serving limit on competition. Too low a bar invites more people into the group, the same people with whom those already having passed the bar will need to compete for their lunch money. A very real part of legal realism MUST deal with this. In today's reality, the financial pressure is in fact compounded by several other factors: the demise of the billable hour and the awakening in the client base of its ability to force "cloistered" professionals to compete more like ordinary businesses is a huge factor. Another is the ever increasing encroachment on what is traditionally viewed as the practice of law by non-bar-accepted professions - be this the "do-it-yourself" NOLO versions, or merely the segmentation of law practice into ever finer pieces, with more and more of those pieces being considered merely "administrative."
I find it beyond Pollyannaish to NOT consider these facets in a conversation such as this one.
"Too low a bar just invites scoundrels and poor practice, and this is one manner of self-policing for "higher quality." "
For a brief moment I thought that you were referring to EPO senior management appointments ... :-)
...and I would add that I have even seen a recent move (one I do not support) to remove the technical requirement from the bar limiting those to be recognized as competent to practice patent law in the US, allowing those without a technical background to be registered with the United States Patent and Trademark Office on a par with those having BOTH legal and technical capabilities.
I must profess that I am unable to grasp how anyone could think such a move would make the realm of patent law a better place, but I have seen such a thought being "championed."
Mr Bean comments:
I trust that what Mr Brophy states is true. If so, it is an improvement, as in the past the Committee included anyone, regardless of EQE qualification. I assure you that it was not always the best examiners who were included. In any case, even if EQE-qualified, in my experience some members of the committee have a sadistic tendency (perhaps in order to show how smart they are) to create awkward, tricky cases for which they themselves would not have a final answer. During correction some smart candidates propose the really good answer, which often differs from the one given by the Committee. This is adopted, and this explains the changes in the markings. Not always a very professional approach. Can you deny that this occurs sometimes, Mr Brophy?
I'm not here to deny anything ;) My experience, several years ago now, was that a paper was set and a draft marking schedule agreed (effectively a model answer); then, when the candidates' answers came in, a large sample of them was marked and the examiners met up to compare experiences and to revise and finalise their marking schedules. Yes, in some cases candidates had answers that were unanticipated and at least arguably valid, and if so, those answers were given credit in the final marking scheme. But I don't think that such a revision is a bad thing; in fact it's the opposite, because it helps recognise that candidates might have come to unexpected but valid conclusions. After that final scheme was agreed, every paper was then re-marked from scratch using the agreed marking scheme, which had spotted the surprises and possible flaws or ambiguities in the original scheme.
As regards the question of self-interest and raising the bar, half the committee are, or were, EPI members who wanted good candidates to pass. In my experience there was little interest in raising the bar too high because, after all, many of the examiners come from firms which generally want their own trainees to qualify and pass once they are competent to practice.
Incidentally, none of this is an apology or defence of the apparent changing of marks discussed in Darren's post. I agree with his comments and personally think that, for a multiple-choice paper, the pre-exam tends to have more ambiguity than I would expect, as evidenced by the fact that the very knowledgeable guys at Delta Patents got several answers "wrong", according to the (hopefully) final marking scheme.
As far as I can remember, the members of the examination committees were not over-enthusiastic about the multiple-choice pre-exam. However, the EPO management introduced it to render the examination more efficient, with little consultation of the committee members. (Has anybody heard of efficiency and lack of consultation in the context of EPO reforms before?)
Former committee member
This is for the benefit of anybody, like the US Anon, unfamiliar with the EPO's "pre-EQE" exam.
Think of the qualification process to become a European Patent Attorney as an EPO college entrance exam and then, years later, an EPO college graduation exam. The exam we are talking about here is the one you need to pass to get you into college. Just that.
Few candidates graduate from this school. But it is thought that with an entrance exam first to pass, the level of heartbreak and wasted work by finals examiners can be reduced.
David,
From a fellow former EQE examiner,
You comment "...revise and finalise their marking schedules. Yes, in some cases, the candidates had answers that were unanticipated and at least arguably valid and if so, those answers were given credit in the final marking scheme."
It is for that very reason that a multiple-choice paper was a bad idea. A tick in a box gives no clue as to why that box was ticked, and so no chance to correct the marking schedule in a considered way.
It would have been far better to simply separate the two parts of paper D and have D1 as the pre-exam.
D1 tests whether the candidate turns up on the right day with the right books, knows how to navigate the books; and critically whether the candidate has put in adequate preparation to be able to answer simple questions. That is a pre-exam.
The high level skills of D2 (which tests whether the candidate is fit to give advice that is not dangerous) have never made a good match with the low level skills of D1.
Meldrew and Former Committee Member, I tend to agree with both of you. My possibly faulty memory of the introduction of a pre-exam (I too was gone by the time it came in) was that many assumed D1 could and should be used as the qualifier paper. It was always much faster to mark anyway and you usually had a pretty good idea of the candidate's knowledge and preparedness when you were halfway through marking a D1 paper.
MaxDrei @ 9:59,
Thanks.
Your post, while enlightening, makes no difference at all to the actual points that I introduced with my post.
@THE US anon:
I think MaxDrei's post certainly does make a difference once you also take into account that in the first few years of the pre-exam practically nobody failed it. It clearly had to be made less easy to pass or it would be completely useless.
2012: 5 failing out of 390
2013: 2 failing out of 643
2014: 107 failing out of 648
2015: 221 (or 205) failing out of 796
It seems normal that the Examination Committee needs multiple tries to get the level "right".
an EU anon,
Why is it that "someone must not pass"...?
You operate on an assumption.
Further, the point of whether or not someone "must not pass" (the assumption you operate upon) still does not reach the comments that I made.
So with all due respect, it appears that you too, are making a statement that does not touch what I actually stated.
Mind you, adding to the conversation with a different point is all fine and good. However, as I point out, responding to MY post with what are evident non sequiturs to the points I present is not really engaging those points, now is it?
@THE US anon:
Someone must not pass, because there is overwhelming empirical evidence that many candidates sitting the "real" EQE exam are (or at least used to be) severely underprepared.
The purpose of the pre-exam is to reduce the number of those underprepared candidates, so that fewer resources are wasted on marking their hopeless papers. So it's simply a cost-saving measure.
I do indeed operate on the assumption that there are many underprepared candidates sitting the pre-exam. But I don't see a reason to doubt this assumption. It has always been so for the real exam, so why would it suddenly be different for the pre-exam?
Regarding the comments you made: the only point I can extract from your post of 21 March 13:59 is that the profession might be trying to limit competition by making the exams harder.
In general, that is certainly something to keep in mind. But as regards the pre-exam, taking into account its limited purpose and the numbers over 2012 to 2015, I really see no evidence that any wish to limit competition is at play.
"However, as I point out, responding to MY post with what are evident non sequiturs to the points I present is not really engaging those points"
Could you please concisely formulate your points? And do those points that I have not addressed relate at all to the EQE Pre-exam uncertainty that is the topic of this blog post? I see no reason why I or anyone else should address, for example, the abstract possibility that the profession might wish to limit competition if that has no connection with the topic at hand.
@EU anon: In 2012 and 2013, you only needed 50 marks to pass. Nowadays you need 70. They didn't really change the exam. Passing rates in the main exam for candidates that scored <70 in the pre-exam are 0% for both the 2012 and the 2013 group. The decision to raise the bar to 70 marks seems to be a logical step.
2012: 37 scored <70, i.e. about 10%
2013: 74 scored <70, i.e. about 12.5%
2014: 14%
2015: 25%
This year seems to be significantly more difficult than the previous years. But the A, B, C and D exams are also not equally difficult every year. For a new type of exam, the difficulty level will always vary more than for a more established one.
The effect on the overall pass rate (pre-exam + main exam) will not be huge. Candidates who scored <80 in the pre-exams of 2012 and 2013 only had a success rate of 5-15% in the main exam.