The Philadelphia School District was already paying former superintendent Arlene Ackerman close to $1 million to go away. But that was before Ackerman applied for unemployment benefits. Fox 29 has confirmed the former superintendent has filed for the same benefits that needy families seek. If the application is approved, Ackerman could rake in more than $2,000 a month, on top of her taxpayer-funded severance package.

As the news of Ackerman's unemployment claim rippled across Philadelphia, the reaction seemed all the same: stunned silence followed by outrage. While Ackerman is within her rights to seek the benefits, taxpayers we spoke with just couldn't believe it. Ackerman walked away from Philadelphia schools with a payout of over $905,000 with two years left on her contract.

Rosa Rivera was at school headquarters Tuesday afternoon to seek help for her struggling fourth grader, and she was shocked when Fox 29 broke the news to her. It was a common reaction across the city.

Ackerman, who left the district before school doors opened in September, earned $346,000 a year and, if approved by the state, could take in about $573 in weekly unemployment benefits. And the School Reform Commission, the panel which runs city schools, will not contest her getting the benefits, because that is written right into her separation agreement. A school source says the district got a copy of Ackerman's unemployment claim just over a week ago.

City resident Donna Sherry says she lives on $674 a month in disability. "My reaction is shock," she said.

The benefits have not been approved by the state Department of Labor and Industry, and the agency doesn't talk about individual claims. Ackerman directed us to her attorney, Dean Weitzman, who today said Ackerman is legally entitled to unemployment benefits.
Key Listen: 34:00 to 58:00 (PART 1) for The School Turnaround group presentation to the BOE. Of note, the state board members are asking tame and ineffective questions predicated on a rosy portent of success. They need to be asking what is going on in these schools, not how we can ram this into more schools. Mr. Heffernan, who is usually on point with questions, really mailed this one in….
State Board of Education Meeting – Audio – 11/15/2011- Full Playlist
at the NYSE of course, ringing the bell
Common Core standards pose dilemmas for early childhood
By Samuel J. Meisels
After a decade of concerns and criticisms about the lack of rigorous national standards in the No Child Left Behind Act, we now have a set of ambitious standards for use nationwide — the Common Core State Standards. Since their formulation two years ago, these standards have been adopted by 45 states, were made a precondition for funding in the Race to the Top competition, and have begun to influence the development of new curricula and assessments. But early childhood education — concerned with children from birth to the end of third grade — seems nearly an afterthought in the standards. Not only do they end (or begin) at kindergarten, ignoring more than half of the early childhood age range, they simply don’t fit what we know about young children’s learning and development.
No one, including early educators, can afford to overlook standards. They’re critical for setting pedagogical goals and helping us know where we’re going instructionally and what we can hope to accomplish once we get there. They’re essential for establishing reasonable expectations or benchmarks for teaching and for deciding which curriculum to follow. And they’re fundamental to conducting meaningful assessments.
In some ways, you could say that we can’t live — or at least teach effectively — without standards.
But it’s not clear that early educators can live with them — or at least not live very comfortably with the way the Common Core standards are constructed.
First, they are “top down.” Work on the standards began at the end of the chronological range, that is, at the level of college and career readiness, and then was successively calibrated downwards by age and grade. By the time the authors came to K – 3, there was little room for flexibility. Some things that belong were omitted and some that don’t were included.
Second, they represent sky-high aspirations. Although many children will achieve the proficiency levels established by the Common Core, many more will not. When one recognizes that one-third of all students who took the 2009 National Assessment of Educational Progress reading test were unable to read at basic levels in fourth grade, one can see that these standards (which expect children to be able to read in kindergarten) are set too high.
Third, the standards are profoundly incomplete. They consist only of English Language Arts and math, with a promise of science to come some time in the future.
What about socio-emotional development? What about approaches to learning and the arts? What about executive function and self-regulation? What about motor and physical development?
These are not unimportant domains of learning in early childhood; they are often the explicit path to achieving cognitive outcomes for young children. Don’t forget that much of the evidence from longitudinal studies of the impact of early intervention tells us, in economist James Heckman’s words, that “skills are multiple in nature. A proper accounting of human skills recognizes both cognitive and non-cognitive skills.” We ignore them at our peril.
Finally, no link between standards, assessments, and curriculum exists at present. This is an issue that goes beyond early childhood since curriculum and assessment without standards are blind and, to turn a phrase coined by the philosopher Immanuel Kant, standards without curriculum and assessment are empty.
We need all three elements — standards, assessments, and curriculum — to create a meaningful educational program in early childhood or at any level. If we only have standards, it’s like having a list of destinations without a map. We may be interested in reaching them, but we don’t have any idea of how to get there or any way of knowing if we’ve arrived.
In short, the Common Core standards pose a set of significant dilemmas for early childhood. Although they’ve been embraced by states across the nation, top-down standards such as those in the Common Core distort early learning. They’re not sensitive to the learning patterns of young children and they impute too many of the skills of older children to those who are younger. If the difficulty level of the standards is too high, they won’t be used, and if the domains covered by the standards are too narrow, they will have no lasting value.
Of all the problems posed by the Common Core, the most significant may be how easily the standards can be transformed from benchmarks into thresholds. Instead of simply setting goals for student learning, they have the potential of being treated as prerequisites for student achievement so that all children who don’t meet those requirements will be at risk of failure. In this way, a benchmark can be turned into a threshold, and a marker of achievement can become a barrier to learning.
The literacy scholar Elfrieda Hiebert reminds us: “When the steps are too big and when the capabilities of students do not match the size of the steps, the progression up the stairway of . . . complexity will likely be fraught with missteps and injuries.”
Because of the way the K – 3 standards are constructed at present, they are likely to identify only a small proportion of students as successful and may label many more as failures.
If this should come to pass, the real failure will lie with the standards and their implementation, not the students. The size of the stair matters in the race to the top. It is time to revise the standards before too many children trip over the threshold of learning.
Q: What famous delawarean has McKinsey roots?
A: Governor Jack Markell
Q: Does DE have McKinsey employees/affiliates in the DOE?
It’s long but a good read……
Why the McKinsey Reports will not improve school systems
Frank Coffield offers the best summary statements of two papers’ flaws I’ve ever read.
The author discusses two McKinsey reports, How the world’s best-performing school systems come out on top, by Michael Barber and Mona Mourshed, [foreword by Michael Fullan, special Advisor on Education to the Premier of Ontario] September 2007 and How the world’s most improved school systems keep getting better by Mourshed, Chijioke and Barber, Dec. 2010 (launched with the supportive participation of the US Secretary of Education Arne Duncan). You can see the authors speaking at the launch here. And here is the US Department of Education announcement of Arne Duncan’s participation.
I admit to enjoying the takedown of Michael Barber and of ‘best practice.’
Reports which have achieved such global influence within a short time deserve the closest scrutiny. Yet when they are so examined, the first fails for at least four reasons: it is methodologically flawed; selective; superficial; and its rhetoric on leadership runs ahead of the evidence. The second, although it corrects some of the faults of its predecessor and offers a more elaborate explanation of success, still possesses six faults: it has an impoverished view of teaching and learning; its evidential base is thin; its central arguments are implausible; its language is technocratic and authoritarian; it underplays the role of culture in education and it omits any mention of democracy.
Below find excerpts from the article.
by Frank Coffield
Institute of Education, Lifelong Education and International Development, University of
London, London, UK
In the last four years McKinsey and Company have produced two highly influential reports on how to improve school systems. The first McKinsey report, How the world’s best-performing school systems come out on top, has since its publication in 2007 been used to justify change in educational policy and practice in England and many other countries. The second, How the world’s most improved school systems keep getting better, released in late 2010, is a more substantial tome which is likely to have an even greater impact. This article subjects both reports to a close examination and finds them deficient in 10 respects. The detailed critique is preceded by a few general remarks about their reception, influence and main arguments.
Since the publication in September 2007 of the first McKinsey report, How the world’s best-performing school systems come out on top, its conclusions have quickly hardened into new articles of faith for politicians, policy-makers, educational agencies and many researchers and practitioners, both in this country and abroad. The claim made by the two authors, Michael Barber and Mona Mourshed (their study will from now on be referred to as the B and M report or the first McKinsey report), is that they have identified the three factors behind ‘world class’ performance:
getting more talented people to become teachers, developing these teachers into better instructors, and ensuring that these instructors deliver consistently for every child in the system. (Barber and Mourshed 2007)
In the above quotation teachers have become ‘instructors’, although this could be explained as the use of the standard term for a teacher in North America. The report has been received as though it will ‘transform’ the performance of schools anywhere, irrespective of their culture or socio-economic status.
A Google search (on 31st January 2011), for instance, produced 2250 references to the first McKinsey report; and the entries include articles and reports from government agencies and researchers across the globe. The study has also been translated into French and Spanish and overwhelmingly its reception has been favourable, being described, for example, as ‘a unique tool to bring about improvements in schooling’ by Schleicher (2007), Head of the Education Directorate at the OECD.
The second report has had far less time to garner plaudits, but Prof. Fullan, Special Adviser on Education to the Premier of Ontario, ends his foreword as follows:
This is no ordinary report . . . It will, by its clarity and compelling insights, catapult the field of whole system reform forward in dramatic ways. (Fullan 2010, 11)
The three themes identified by the first report constitute a serious oversimplification, but the two authors then claim that ‘the main driver . . . is the quality of the teachers’ (op cit: 12). So their explanation of a highly complex set of relations is reduced to only one factor, which has been seized upon by even well-seasoned researchers:
Evidence is accumulating from around the world that the single most significant means of improving the performance of national educational systems is through excellent teaching (e.g. Barber and Mourshed 2007). (Pollard 2010, 1)
Such a reaction is understandable given the derision which has been poured over the teaching profession for decades by politicians and the media (see Ball 1990). But unaccustomed praise must not blunt criticism. After all, did not Ausubel (1968, vi) argue: ‘The most important single factor influencing learning is what the learner
already knows’? Moreover, it is a standard finding of educational research that, of the two main factors which influence test scores — the quality of the teaching and the quality of the student intake — the second is far more powerful. For example: 77% of the between school differences in student performance in the United Kingdom
is explained by differences in socio-economic background. Among OECD countries, only Luxembourg has a higher figure (OECD/average 55%). (OECD 2010b, 1)
In other words, the superior performance of ‘good’ schools is explained, first by the high quality of their student intake, and then by a host of other factors, including the quality of teaching. The argument should be about complexity and multiple causation, not about the overwhelming significance of one factor.
In the first White Paper on schools, entitled The Importance of Teaching, issued by the Tory-led coalition government in England in November 2010, the second B and M report is quoted approvingly seven times in the first 20 pages; and the Prime Minister and Deputy Prime Minister in their joint foreword to the report make use
of one of its summary statements: ‘The first, and most important, lesson is that no education system can be better than the quality of its teachers’ (DfE 2010, 3). The belated importance now being accorded to teachers is welcome, but the success of an education system depends on far more than one central factor.
The attention paid to the first McKinsey report can partly be explained by the anxiety displayed by politicians of developed and developing countries to compete successfully in the global knowledge economy. When they are told authoritatively that this can be done most effectively, not by reforming the quality of goods and services, but by making three (or perhaps only one) changes to their schooling system, then the prominence of the report’s conclusions in policy documents becomes easier to understand. Politicians are also understandably galvanised by the relative poor performance of their education system as measured, for example, by the surveys undertaken by the programme for international student assessment (PISA), run by OECD. The UK government responded to the latest international comparison by commenting:
. . . we are falling behind international competitors . . . we fell from 4th in the world in science in 2000 to 16th in 2009, from 7th to 25th in literacy, and from 8th to 27th in maths. (Cabinet Office 2011, 1.7)
The USA did little better, performing below the average in maths, ranked 25th out of 34 countries (OECD 2010a).
Politicians continue to draw a simple causal connection between investment in education and increased economic growth: for instance Hayes (2011, 1), the Minister of State for Further Education in England, argued recently: ‘Higher skills bring higher productivity’. The research evidence, however, was well summarised by
Wolf (2002, xii) almost 10 years ago: ‘The simple one-way relationship which so entrances our politicians and commentators — education spending in, economic growth out — simply doesn’t exist’. Yet the Department for Education in November 2010 sought to bolster its argument in favour of the relationship as follows: ‘As
President Obama has said: “the countries that out-teach us today will out-compete us tomorrow”’ (DfE 2010b, 4). The main purpose of education is here reduced to improving the competitiveness of industry. A more comprehensive approach would be to argue that countries benefit economically from expanding education at all levels, but that the cultural, democratic and social goals of education are every bit as important. Grubb and Lazerson in discussing ‘the education gospel’ in the United States call upon public figures, politicians and commissions:
to moderate their rhetoric, difficult as that might be in the era of the sound bite. Simple claims for education should be replaced with a deeper understanding of what education and training can accomplish, and which goals require other social and economic policies as well. (Grubb and Lazerson 2004, 263)
The second McKinsey report, written by Mourshed, Chijioke and Barber, (hereafter called the MCB report or the second McKinsey report) provides, at more than three times the length of its predecessor, a much more considered explanation of continuous success. It had its American launch in December 2010 with the supportive
participation of the US Secretary of Education, Arne Duncan. MCB claim to have identified formerly unrecognised patterns of intervention among the 20 school systems they studied by means of 200 interviews with education leaders during visits to these systems, and by collecting data on nearly 575 interventions tried by those systems. They chose systems that had ‘achieved significant, sustained and widespread gains in student outcomes on international and national assessments from 1980 onwards’ (Mourshed, Chijioke, and Barber 2010, 17). ‘Gains in student outcomes’ is, however, shorthand for rises in test scores in only three subjects: reading, maths, and science. The chosen 20 were then divided into two sets: the first set of ‘sustained improvers’ included countries like Singapore and England, provinces like Ontario in Canada, and districts like Long Beach in California; the
second set of seven ‘promising starts’ included countries like Chile and Ghana and provinces like Madhya Pradesh in India and Minas Gerais in Brazil.
The 20 systems were then placed into one of five performance stages — poor, fair, good, great and excellent — according to their test scores on a number of international assessments (e.g. PISA, TIMSS, PIRLS, etc.), which were converted onto a ‘universal scale’ (op cit: 130) with the following cut-off points: ‘poor’ equates
with scores of less than 440; ‘fair’ with scores of 440-480; ‘good’ 480-520; ‘great’ 520-560 and ‘excellent’ more than 560, a score not achieved by any of the 20 systems studied. It was, however, reached by Finland which was excluded, without explanation, from the study. So the only country in the world to achieve educational
‘excellence’ according to this methodology was omitted.
One of their main claims is that each of these performance stages ‘is associated with a dominant cluster of interventions, irrespective of geography, culture or political system’ (op cit: 24), and these clusters are briefly represented in Figure 1. In all four stages from poor to excellent the reins of power remain firmly in the hands of
the centre and the ‘strategic leader’, although the role of the centre and the leader changes from highly prescriptive direction to granting more autonomy to those teachers and schools which the centre considers to have merited it. As school systems move across the continuum from poor to excellent, the centre also changes
from ‘prescribing adequacy’ to ‘unleashing greatness’ (op cit: 52) by introducing such measures as school self-evaluation and professional learning communities. Assessing the performance level of a school system (anything from ‘poor to excellent’), is the first of five strategies for improvement. The other four are: choosing
the appropriate interventions for that stage; adapting those interventions to the particular contexts of the system (history, politics, culture, etc.); sustaining the improvements made; and ‘igniting’ the reforms, that is, how to get them started.
[Figure 1. Five stages from poor to perfect. Based on Mourshed, Chijioke, and Barber (2010).]
See Figure 2 for an overview of these five strategies, which consist of 21 factors in all. Within three years the task of improving school systems has become decidedly more complicated with three factors being cited in the first report but 21 in the second; and yet, as this article will argue, it remains far more complex than even this
more elaborate scheme allows for.
[Figure 2. Five improvement strategies. Based on Mourshed, Chijioke, and Barber (2010).]
The structure of this article
Reports which have achieved such global influence within a short time deserve the
closest scrutiny. Yet when they are so examined, the first fails for at least four reasons: it is methodologically flawed; selective; superficial; and its rhetoric on leadership runs ahead of the evidence. The second, although it corrects some of the faults of its predecessor and offers a more elaborate explanation of success, still possesses
six faults: it has an impoverished view of teaching and learning; its evidential base is thin; its central arguments are implausible; its language is technocratic and authoritarian; it underplays the role of culture in education and it omits any mention of democracy. These failings are listed in Figure 3.
The four deficiencies in the first report
Methodologically flawed
The B and M report contains two methodological weaknesses. First, it is in essence a comparative study of 25 school systems, but it does not compare like with like.
The English system (with 23,000 schools) is constantly being compared unfavourably with Alberta (4000 schools), Singapore (351 schools) and Boston (150 schools).
To deal just with the category of size, the challenges faced by a national system of over 20,000 schools are of a different order than those faced by a city with 150. Besides, the English system has different aims and values; what it wants from its schools is very different from what Singapore wants.
[Figure 3. 10 Weaknesses.]
Other dimensions could be mentioned: there are, for instance, far more languages (over 100) spoken in primary schools in Tower Hamlets (a London borough) than in the whole of Finland (about five). Comparison, no matter how crude, has, however, become a means of governance: ‘Comparison is used to provide evidence that legitimises political actions’ (Ozga 2009, 158). There is not sufficient acknowledgement of the complexities involved in attempting to derive lessons from another country (never mind 25), because of enormous differences in
educational history, politics, socio-economic conditions, culture, and institutional structures.
Second, the manner in which B and M discuss the National Literacy Strategy in England does not meet two important criteria of research ethics. In the first place, Michael Barber does not declare a personal interest in his presentation of the statistics which, it is claimed, show ‘dramatic impact on student outcomes . . . in just
three years’ (Barber and Mourshed 2007, 27). During those years he was Chief Adviser to the Secretary of State for Education on school standards in England. In short, he is defending his own record, and the trust of readers who do not know of his involvement is being abused. In the second place, he fails to include in his account any of the publicly available data which flatly contradict his claim. A more independent judgement is called for.
Dylan Wiliam, an internationally acknowledged expert in assessment, has studied the attempts by successive English governments to raise student achievement and concluded that they ‘produced only marginal improvements’ (Wiliam 2008). In more detail, the largest rises in test scores in literacy took place before the strategy
was introduced in 1999. A modest improvement took place after the first year of the strategy, but the test scores flatlined in subsequent years. Much the same pattern can be discerned in the test scores for numeracy. But the most interesting data refer to the test scores in science, which performed best of all, but had no national strategy.
What conclusion can be drawn? Teachers became highly skilled at preparing their students to take tests long before the national strategies in literacy and numeracy were introduced. These two interventions proved largely ineffective but were hugely expensive at a cost of £500 million. Both these findings should have been
included in the B and M report.
Members of the international education community deserve to be given the full rather than such a partisan account. When they are not so treated, in good faith they represent a highly contentious claim as an established research finding: see, for example, Wei, Andree, and Darling-Hammond (2009, 3).
In their preface, the authors state: ‘We have chosen not to focus on pedagogy or curricula, however important these subjects might be in themselves’ (op cit: 9). The McKinsey report claims to be an international benchmarking study of school improvement and yet it omitted to study what subjects the schools were teaching or how they were taught. But it is not only pedagogy and curricula which are absent. Their analysis lacks any discussion of: governance and policy; discrimination whether of class, race, religion or gender; parental influences on education; how culture and teaching come together as pedagogy; or the aims and purposes of education.
One of the tacit assumptions in the report is that ‘best practice’ can be readily identified and disseminated throughout a school or a system. The notion of ‘best practice’ is referred to no less than seven times so it is a central plank in their argument. Yet even a casual acquaintance with the extensive literature on transfer would have alerted the authors to this enduring problem which has puzzled researchers since William James’s experiments in the 1890s and which continues to do so (e.g. Grose and Birney 1963; McKeough, Lupart, and Marini 1995; Eraut 2002).
There are two classes of difficulty — one concerning the identification of ‘best practice’ and the second with its dissemination. It is not possible to say that a particular practice is ‘good’, ‘best’ or ‘excellent’ in all settings, on all occasions and with all students. The criteria or norms by which these judgements are made are rarely explicitly stated. How can one ‘best practice’ cope with the immense diversity of local contexts and individual needs? The terms used, ‘good’, ‘best’ and ‘excellent practice’, are also ambiguous, flabby and used interchangeably (see Coffield
and Edward (2009) for more on this theme). The notion of one ‘best practice’ also betrays a misunderstanding of the situated nature of learning. As James and Biesta (2007, 37) argue:
Because of the relational complexity of learning and of the differing positions and dispositions
of learners [and, I would add, of teachers], there is no approach that can ever guarantee universal learning success.
Instead, B and M make the extraordinary claim that ‘best practices . . . work irrespective of the culture in which they are applied’ (op cit: 2), but they offer no evidence to support it.
Disseminating ‘best practice’, usually by means of cascade training, is also fraught with difficulties. The approach may appear to be intuitively sensible, it is a cheap way to seek to influence large numbers of teachers, and it operates in the main with existing staff. Evidence for its effectiveness is, however, very hard to come by.
The outcomes tend to be described as inconsistent (e.g. Adult Learning Inspectorate 2007), which is a euphemistic way of saying patchy or difficult to discern. The conclusion of most empirical studies can be expressed as follows: what begins as a cascade at the centre becomes a trickle in the classroom (e.g. Hayes 2000). Its critical weakness, however, is that teachers at the receiving end are passive in relation to the content and process of the ‘best practice’; but they tend to exert their professional independence by appearing to comply, while adapting, ignoring or rejecting top-down reforms (see Coffield et al. 2008). Instead, teachers who ‘own’ the innovation, who are both the originators and recipients of ‘good practice’, tend to learn from other teachers in equal partnerships, based on mutual trust (see Fielding et al. (2005) for a fuller explanation). Galbraith (1992, 27) memorably captured the similar difficulty with the trickle-down theory of wealth: ‘If the horse is fed amply with oats, some will pass through to the road for the sparrows’. In more prosaic language, if you are concerned about the welfare of sparrows, feed them directly.
The rhetoric on leadership runs ahead of the evidence
B and M claim ‘The evidence suggests that strong school leadership is particularly important in producing improvement’ (op cit: 30). The main source for this statement is a study by the National College for School Leadership (NCSL) in England, which in 2006 produced Seven Strong Claims about Successful Leadership (Leithwood et al. 2006). The NCSL is an interested party in any discussion on the effectiveness of leadership, as it has to justify its existence to the government which funds it. The NCSL report presents, however, far less compelling evidence than its title would suggest. It concludes, for example, that ‘research on school leadership has generated few robust claims’ (Leithwood et al. 2006, 15). It also admits that ‘leadership explains only 5-7% of the difference in pupil learning and achievement across schools’ (op cit: 4). Certainly, 5-7% will affect a substantial number of students, but it would be preferable to work with the factors which explain the other 93-95%, even when the amount of variance covered by measurement error is discounted.
Unlike B and M, Hartley (2007, 2009) has reviewed the extensive literature on leadership and concluded:
attempts to show a direct causal relationship between leaders’ behaviour (be it distributed or otherwise) and pupils’ achievement have yielded little that is definitive . . . the policy is ahead of the evidence. (2007, 204)
The style of leadership favoured by the first McKinsey report, which describes principals as ‘drivers of improvement in instruction’, appears to be of the strong, hierarchical type, where aspiring heads ‘shadow top private sector executives’ (op cit: 31). The claim appears to be that there are context-independent qualities of leadership, which enable captains of industry to run FE colleges successfully; or equip heads of outstanding schools in the leafy suburbs to repeat their success in the inner city. B and M are in danger of resurrecting the myth of the hero-leader or hero-innovator:
the idea that you can produce, by training, a knight in shining armour who, loins girded with new technology and beliefs, will assault his organisational fortress and institute changes both in himself and others at a stroke. Such a view is ingenuous. The fact of the matter is that organisations such as schools and hospitals will, like dragons, eat hero-innovators for breakfast. (Georgiades and Phillimore 1975, 315)
The six deficiencies in the second report
Before introducing these weaknesses, a brief comparison between the two McKinsey reports is called for. The second report is a considerable improvement on the first and accords a much enhanced role to collaboration which becomes ‘the main mechanism for improving teaching practice and making teachers accountable to
each other’ (op cit: 4). Fullan’s latest work is well integrated into the argument: ‘Collective capacity generates the emotional commitment and the technical expertise that no amount of individual capacity working alone can come close to matching’ (quoted by Mourshed, Chijioke, and Barber 2010, 84). Fullan, however, then introduces
the notion of ‘collaborative competition’ as a force for change, where educators are expected to ‘outdo themselves and each other’ (op cit: 138). This is a clever-sounding oxymoron dreamed up by a policy adviser to chivvy teachers.
The MCB report also has the virtue of asking two vital questions: ‘How does a school system with poor performance become good? And how does one with good performance become excellent?’ (op cit: 2). This time their answers are more intricate and include the claim that:
six interventions occur equally at every performance stage for all systems . . . building the instructional skills of teachers and management skills of principals, assessing students, improving data systems, facilitating improvement through the introduction of policy documents and education laws, revising standards and curriculum, and ensuring an appropriate reward and remuneration structure for teachers and principals. (op cit: 3)
This statement is clearly a major advance on the earlier argument that the quality of teaching is the single most significant factor, but where is the evidence for the claim that ‘six interventions occur equally at every performance stage for all systems’? The second selection of factors remains an oversimplification of the complexities involved; it represents a continuation of command and control from the centre despite repeated talk of decentralisation; and it suffers from a fatal omission. The six deficiencies in MCB will now be described.
Impoverished view of teaching and learning
Despite references to a ‘system’s pedagogy’ (op cit: 99), the MCB report contains neither an explicit view of teaching and learning nor a vision of education. Their implicit model can, however, be pieced together from the metaphors they employ. For instance, they discuss ‘the transmission of effective teaching strategies’ (op cit: 48); ‘the best delivery methods’ (op cit: 50) and ‘once a teacher had adopted the right approach . . .’ (op cit: 88). These remarks suggest that the authors adhere to the acquisition model of learning, where the minds of learners are viewed as containers to be filled with knowledge, but they give no indication that there are other competing models such as participation (Lave and Wenger 1991), construction (Evans et al. 2006) and becoming (Biesta et al. 2011). The acquisition metaphor characterises the growth of knowledge in students as a step-by-step process of gaining
facts, skills and understanding — the equivalent of steadily walking upstairs. It frequently is so, but every so often the breakthroughs students make in their thinking are more like the leaps salmon make in foaming rivers. We need more than one metaphor as Sfard (1998) has argued.
Moreover, the belief in one right approach to teaching needs to be rejected. The authors also approve of a shift in ‘emphasis on what teachers teach to one on what students learn’ (op cit: 89). This has in recent years become a policy cliche used by technocrats who are far removed from the classroom. The alternative is to treat teaching and learning as part of a single process, as the two sides of the same coin, where teachers and students are partners in learning who work together in harmony and who are both involved at different times in teaching and learning. The second report, in claiming that teachers ‘deliver’ facts and skills to students, betrays an impoverished conception of teaching and learning. The later change in emphasis to students’ learning is welcome, but it continues to treat teaching and learning as two separate processes.
A thin evidence base
MCB do not locate their findings within the relevant literatures; there is no bibliography, only nine incomplete references to other books and articles and only two to policy documents. There is no mention of the large, critical bodies of research on cascade training (e.g. Wedell 2005), on the transfer of training (e.g. Tuomi-Gröhn and Engeström 2003), on the psychology and sociology of teaching and learning (e.g. Bernstein 1996; Wenger 1998; Daniels 2001; Hart et al. 2004; Illeris 2007; James and Biesta 2007; Rudduck and McIntyre 2007; Ball 2008; Coffield 2010).
The authors also approve of management exercising control by means of continuous flows of performance data: witness their supportive description of Aspire, a set of American charter schools in California: ‘At the heart of Aspire’s implicit values is a rigorous attention to data-driven improvement. The system has an almost
religious commitment to empirically (sic) analysis of what works in practice and then applying it’ (op cit: 87). There is nothing wrong in asking the perfectly reasonable question: what works? The problem is that the answer is invariably complex. MCB also seem unaware of the highly critical literature on the ‘what works’ approach. Gert Biesta, to give but one example,10 has criticised:
the whole discussion about evidence-based practice [being] focused on technical questions – questions about ‘what works’– while forgetting the need for critical inquiry into normative and political questions about what is educationally desirable. . . . From the point of view of democracy, an exclusive emphasis on ‘what works’ will simply not do. (2007, 21-22)
According to Ozga (2009, 149), the Department of Education in England claims to be moving towards self-regulation through self-evaluation, thus giving the appearance of deregulation, ‘but the centre retains control through its management and use of data, and local government remains peripheral’. Management by means of data
has, according to Ozga, become a powerful instrument of policy and ‘governing through data is leaner, simpler and less heavy and mechanical in its use of power [than command and control], but it is still insistent and demanding’ (op cit: 159). This is not to denigrate the importance of data — better a gramme of evidence than a kilo of supposition — but to question control being increasingly exercised through relentless demands for data.
The authors’ handling of statistical terms is also potentially misleading. On pages 26, 34, 52, 60 and 135 the report talks of a ‘strong’ or ‘striking’ correlation or it refers simply to ‘a correlation’. But the report uses the term ‘correlation’ as if it means a direct causal relationship between the ‘tightness of central control’ and the stage of improvement of a school system (op cit: 34), when the term strictly means no more than an association.
Implausible central arguments
Why are the interventions located at particular stages in the ‘improvement journey’ from poor to excellent? For instance, the strategy (of releasing teachers from administrative burdens by providing administrative staff) is allocated to the final stage of moving from great to excellent and as such is apparently part of a ‘unique intervention cluster’ (op cit: 36). Does this mean that teachers, considered to be working in systems described as poor, fair or good, should not be released from such burdens? If they were to be so released, would their journey from poor to fair, or fair to good, or good to great not be all the shorter and less stressful? No explanation is offered as to why this intervention should be used only at the final stage. All teachers, at whatever stage their school system is in, need to be freed from as much administration as possible so that they can concentrate on the learning needs of their students. Moreover, teachers are burdened with these administrative tasks because of the demands for data made by management and the state.
Criticisms could also be made about the positioning of other interventions. For instance, the detailed prescription of teaching objectives, plans and materials by the centre for a ‘poor’ system is highly likely to drive any self-respecting teacher out of the profession; and the more talented the teachers, the less likely they will put up
with being told what to do. According to MCB, teachers should wait until their system is declared ‘great’, at which point the centre accords them ‘pedagogical rights’ to choose how they will teach (op cit: 36).
A further concern: what evidence is there that the schools within any of these systems all deserve the same grade at the same time? The authors produce none. Is it not more likely that, the larger the school system, the more schools will be spread out along the continuum from ‘poor’ to ‘excellent’? Given the growing polarisation of educational outcomes in countries like England, it would not be surprising to find concentrations of ‘excellent’ and of ‘poor’ schools within the same system, and even pockets of ‘excellently’ and ‘poorly’ performing students within the same school.11
What should have been central to the arguments in the two McKinsey reports is a considered response to the central finding of educational researchers that ‘pupil prior attainment and background explain the vast majority of variation in school outcomes’ (Gorard 2009, 761). If these ‘global’ policy analysts were to respond to that consistent finding, which they seem not to know about, their recommendations would be very different. . .
Other sections of this report include a discussion of:
- Technocratic and authoritarian language (Jargon)
- The role of culture underplayed
- The omission of democracy
- Final Comments
1. Sir Michael Barber was formerly a teacher, an official of the National Union of Teachers and professor of education at the Universities of Keele and London. He is currently an Expert Partner in McKinsey and head of its ‘Global Education Practice’ (sic). He was Chief Adviser on Delivery to the British Prime Minister, Tony Blair. His latest coauthored book is called Deliverology 101: A Field Guide for Educational Leaders.
2. Dr Mona Mourshed is also a partner in McKinsey and leads their education practice, covering the Middle East, Asia and Latin America.
3. McKinsey and Co is an international management consulting firm.
4. The US Department of Education is currently running a $4 billion competition called The Race to the Top, where states submit bids outlining their plans for comprehensive education reform; specifically, states need to show how they will improve education in science, technology, engineering and maths, ‘driven by an economic imperative’ (Robelen 2010, 6).
5. I suspect that Braun (2008, 317) is right when he argues that the interest of policy-makers in education ‘stems largely from an appreciation of the role of human capital development in economic growth’.
6. To be ‘great’ apparently is not as good as being ‘excellent’.
7. These acronyms refer to international assessments of student attainment in tests. PISA, Programme for International Student Assessment; TIMSS, Trends in International Mathematics and Science Study; and PIRLS, Progress in International Reading Literacy Study.
8. The Tory-led coalition government in England decided in 2010 to seriously reduce funding for the NCSL as part of its austerity measures.
9. I have deliberately given only a few key references to these substantial research literatures to avoid including long lines of names and dates, so those given should be taken as gateways to rich fields of knowledge.
10. Again, I have restricted myself to one reference in the text, but interested readers could also consult Phil Hodkinson’s (2008) valedictory lecture at Leeds University on the same theme. The authors of the McKinsey reports need to acknowledge that such criticisms of their ideas exist and they also need to respond to them. One of the criteria that Michaels, O’Connor, and Resnick (2008) have proposed for educational debate is accountability to the learning community: reports should ‘attend seriously to and build on the ideas of others’. This criterion will need to be addressed if there is to be a third McKinsey report on school systems.
11. The PISA results for 2009 usefully discuss ‘resilient students’, that is, those who come from the bottom quarter of the socially most disadvantaged students, but who perform among the top quarter of students internationally. In the UK, ‘24% of disadvantaged students can be considered resilient’, compared with an OECD average of 31% (OECD 2010b, 7).
12. The three authors are committed to improving the quality of learning, but seem to have no concern for the English language. At times, their prose is clumsy: ‘school systems’ performance journeys’ (op cit: 123); and ‘incremental frontline-led improvement’ (op cit: 60). At other times they introduce ugly neologisms: ‘architecting tomorrow’s leadership’ (op cit: 27); and ‘schools . . . not only outperform . . . but also “out-improve”’ (op cit: 132).
13. Another surprising omission concerns the relative absence of comment in either report on information technology. This prompts me to ask: how many classrooms did the three authors observe?
Notes on contributor
Frank Coffield is emeritus professor of education at the Institute of Education, University of London.
Remember the Jetsons? Poor George, a computer engineer, was the foil of his stereotypical tough boss, the bombastic Mr. Spacely, owner of Spacely’s Space Sprockets. To be fair, Mr. Spacely, even with his Adolf Hitler mustache, wasn’t intentionally evil, just meddlesome. He was a business owner trying to compete with his rival, Cogswell. Together Cogswell and Spacely got in a heap of trouble and George always took the heat.
So, it wouldn’t surprise me that if Spacely received a delivery of bad parts, he’d send them back and insist on a credit to boot. He knows he has to build a superior product – a Sprocket.
What if Spacely had been Principal Spacely in a school when he got that same delivery of bad parts: children who, from the moment they were conceived, were at risk? Principal Spacely still has to deliver a solid product: a high-achieving school. But, he can’t send these bad parts back, can he?
And that’s where the business model being inflicted upon the PreK-12 education system becomes a guaranteed failure. Yelling “Jetson” isn’t going to fix it. And yet – pro-business reformers, RTTT champions, continue to claim that the failures of the education system sit squarely on the shoulders of bad teachers (they would have you believe all teachers are bad teachers) and that unions are at fault. It’s divide and conquer.
Enjoy your Holiday…with a little history lesson….
Delaware’s Department of Education outlines its FY 2013 budget request
Delaware Secretary of Education Dr. Lillian Lowery says the Department of Education’s FY ’13 budget request includes contingency plans.
Delaware Dept. of Education Secretary Lillian Lowery says DOE has been carefully looking at its spending.
DOE’s operating budget request of nearly $1.2 billion would be an increase of 4.98 percent from the FY ’12 budget. The capital budget request tops $116 million, up less than one percent from a year ago but substantially down from pre-recession levels.
“We build our budget two ways. We build it with one percent increase and we build it with one percent decrease,” Lowery said. “We have to be able to mitigate for both of those.”
Lowery highlighted for OMB staff some of the initiatives in public schools over the past year: making the SAT available during the school day to every high school junior; hiring data coaches for teachers and leadership and development coaches for principals; identifying and working with ten Partnership Zone schools most in need of improvement; and establishing the DCAS assessment system to replace the DSTP.
The state is getting $119 million in federal funding through 2014 through its winning Race to the Top application to carry out some of these initiatives.
The Department of Education’s spending plan had the backing of several school administrators who attended the session in Dover. However, Delaware Association of School Administrators Executive Director G. Scott Reihm sounded a cautionary note.
“Federal stimulus funds were used to fund the state budget gap for public education the last two years. These funds have expired and have not been replaced,” Reihm said. He added that since Fiscal Year 2008, state funding has been eliminated for such areas as reading resource teachers, math specialists, school-based discipline programs, extra-time programs and others. Reihm called for funding restoration in the form of “flexible spending appropriations, so that each district has the ability to utilize the funding where it is most needed in their respective district.”
Additionally, Reihm noted that property reassessments were last conducted in New Castle and Kent Counties in the mid-1980s, and in the early-1970s in Sussex County. “Because districts rely on such outdated local property assessments, this has become a stagnant funding source as well as created a large disparity between districts in terms of equity,” Reihm stated.
Budget requests presented during November’s OMB hearings will be used in the crafting of Governor Markell’s recommended budget, which will be presented in late January. Following that, the General Assembly’s Joint Finance Committee will conduct hearings to examine the spending plans of each agency, department and educational institution. FY ’13 begins July 1st, 2012.
My Comments in RED.
Two years ago, Delaware’s teachers, school leaders, and other stakeholders came together with the common purpose of developing a plan to improve our schools (how many? sign in sheets? as a percent of all stakeholders? parents? In a meaningful way? Business Roundtable?). Those stakeholders have worked to implement that plan, which included a policy change making student growth a significant part of educator evaluations. That policy was developed with critical input and support from teachers (who?, how many? percent of total teachers in DE?) and other stakeholders (School Boards were shanghaied, save a few sane souls…..) , and it was a key piece of the state’s Race to the Top plan, which the U.S. Department of Education ranked first in the country (if Arne Duncan calls you first in something, then something’s probably wrong) and funded with $119 million. (Is this supposed to be a good thing? How many teachers signed onto this plan KNOWING that criterion 5 would tie their reviews to the work of other teachers? This claim definitely doesn’t pass the sniff test.)
As a former teacher, principal and superintendent, I understand the challenge of creating an educator evaluation system that fairly measures strong work and identifies areas for improvement. My classroom experience is why, as Secretary of Education, I value most the feedback I get from educators in our schools. Most of the teachers from whom I have heard over the past three years have said they agree with their evaluation being based in part on student achievement (really? I find that hard to believe, can you shed light on this with numbers?) — provided that the student achievement measure is fair and based on student growth (ah, the disclaimer and reason this will fail).
Until now, we have not had the tools to provide fair, growth-based measures of student achievement (Are you saying we do now? Is there a peer reviewed research basis for this claim?). For years, Delaware’s approach to measuring student achievement asked the wrong question. The old Delaware Student Testing Program took one snapshot and asked: How did this year’s class of children score at this one point in time compared to a different set of children from the year before? Thousands of educators and parents argued that the real question should be how much students learned, looking at where the students began the year and where they ended it.
They were right. That’s why Delaware launched the growth model Delaware Comprehensive Assessment System to replace the DSTP last year. DCAS may be given up to four times per year, allowing for measurement of student growth within the school year. And the deeper data that DCAS provides throughout the year allows teachers to adjust their instruction to better meet the needs of their students. (It is a better test, I’ll grant that for sure)
In addition, we are working with hundreds of teachers (and they still felt the need to say the work in progress is a freight train of disaster coming?, doesn’t sound like a whole lotta collaboration there…..) across the state to develop growth measures for DCAS and non-DCAS tested subjects. Last spring, to ensure that Delaware had the time needed to appropriately develop these growth measures, we requested and the U.S. DOE granted a one-year extension to the timeline for full implementation of the student achievement portion of educator evaluations. With technical assistance provided by assessment experts, classroom educators are spending this year developing comparable and fair measures of student growth for every grade and subject.
As a result of these and many other efforts, by September 2012, when the new teacher evaluation system takes full effect, we will have student growth measures for virtually every teacher in every graded subject area (READ: it’s coming and I don’t care if you think it is flawed or sucks, just deal with it); we will have identified performance measures for non-graded educators; we will have two years of implementation data from our statewide assessment; and we will be better able to measure satisfactory growth in many subjects and grades.
Some teachers are concerned about an approach incorporating student performance. As someone who spent many years in the classroom, I understand their hesitation. (really? because if you did the next word in this essay WON’T be “but”) But (Damn, busted!) the ability to identify teachers who excel and those who need additional support is critical to better meeting students’ needs.
Now that we have a starting and finish line, and a way to measure the distance a child travels academically, we have an obligation to look at what works best to help students cover the greatest distance from the start of the year to the end.
How can we teach the rigor of math and science, but not be rigorous in examining student outcomes? How can we teach students the importance of critical thinking and close reading and not take a critical look at student performance results? (Wow, the absolute BEST way to teach children the importance of critical thinking would be to demonstrate your knowledge that standardized tests don’t tell the story of a child.)
Not doing so would forfeit an enormous opportunity to learn what works well and how we can build on it. I welcome all our teachers, particularly those with concerns about this approach, to join us in this work. (you have, they have, it’s just not working out like you thought) After all, we all share the same goal: ensuring the best teaching and learning in our classrooms.
One last question: why is this editorial written by Dr. Lowery and not by Diane Donohue?
17 of Delaware’s 19 districts’ Education Associations signed this letter. I guess Dover’s got work to do….because Jack has said, it’s happening next year like it or not…..and Governor Markell thinks it’s just me with a brain on this issue…….Kudos to these brave leaders for stepping up!
Educator accountability plan is flawed: Delaware Online.
Why do our nation’s children enter school increasingly less prepared to meet the rigors of the school environment, even as over 10,000 educators in Delaware model the importance of hard work and responsibility on a daily basis, and advise their students and parents as to what best practice looks like?
Teacher accountability, a political hot potato, has become a quick and easy way to hold one group solely responsible for the breakdowns in our society. No single group of stakeholders — parents, educators, higher education, business or legislators — can be the single cause or the single solution of the perceived ills of Delaware’s education system. Everyone plays a part.
Where are the elements of parental or student accountability?
The current system of teacher accountability, provided by the state Department of Education, is commonly called the Delaware Performance Appraisal System, revised, or DPAS II-R. Teachers are evaluated on all five components of the DPAS II-R: planning and preparation, classroom environment, instruction, professional responsibilities and student improvement (also called Component 5). This last component seems like the easiest one to measure and document: Test students when they arrive in the classroom, test them when they leave, and the difference is the growth.
Yet upon close scrutiny, what is considered adequate yearly growth, which qualifies a teacher for an effective rating, cannot be quantitatively established with any degree of certainty. It varies based on students’ academic level when they come through the door, on parental involvement and on the focus and condition of the child when he or she sits in front of the computer screen to take the state tests, just to name a few.
How do we measure student improvement for teachers and specialists who are not directly instructing students in DPAS II-R tested areas? These educators include nurses, speech therapists, special-needs teachers and a host of nontested subject area teachers. How do we measure the growth of special-needs students, whose goals are often behavioral or socially based? We have to create measuring sticks of progress for all these educators as well.
The state DOE solution is somewhat complicated, as all educators in a building — regardless of area of instruction — will have 30 percent of their overall student improvement score based on the entire school’s math or reading score, whichever is higher. For example, part of a school nurse’s student improvement score would be based on schoolwide reading or math scores.
Two additional parts to Component 5, which are “in the works,” include having teachers in nontested areas select a “cohort” of students whose test scores would count against that individual’s overall rating. So for the current year, an educator can earn an “exceeds,” “satisfactory” or “unsatisfactory” rating and not face any negative consequences, based on “yet to be developed” Component 5 rubrics.
For 2012-13, educators face improvement plans and more negative consequences for being labeled unsatisfactory. Educators from all areas could potentially be placed on improvement plans should the entire school, or their cohort, perform poorly on tests they neither administer nor have any direct instructional impact on.
Imagine the exodus of teachers to schools where parental involvement is strong, or the pressure on educators to find a cohort of strong test-takers, that would suddenly take center stage.
While all educators understand the need to be held accountable, all measuring tools, in any other profession, are based on a consistent and reliable set of variables. Children are not variables; they do not arrive at school every day with the same level of readiness and cannot be easily quantified by data collectors.
Current research shows standardized tests have narrowed our curriculum to near irrelevance, as an increasing amount of our energies are used to ensure allegiance to a particular strand of knowledge. Success beyond the secondary level is much more dependent on the broad ability to think, problem-solve and communicate effectively.
Component 5, while intended to measure teacher effectiveness, simply hides the fact that students are human beings and bring ever-changing needs to our schools.
So, while we understand the need to put something in place to measure teacher effectiveness, we should no more measure teachers in the proposed fashion than we would measure doctors’ effectiveness when their patients fail to follow their medical recommendations.
We ask Delaware’s legislators and school board members to please reconsider this flawed plan and work with all the stakeholders to create a more fair and inclusive system of accountability.
This is what happens when a great team and a great leader get together and we just provide resources and get out of the way!
How About Better Parents?
In recent years, we’ve been treated to reams of op-ed articles about how we need better teachers in our public schools and, if only the teachers’ unions would go away, our kids would score like Singapore’s on the big international tests. There’s no question that a great teacher can make a huge difference in a student’s achievement, and we need to recruit, train and reward more such teachers. But here’s what some new studies are also showing: We need better parents. Parents more focused on their children’s education can also make a huge difference in a student’s achievement.
How do we know? Every three years, the Organization for Economic Cooperation and Development, or O.E.C.D., conducts exams as part of the Program for International Student Assessment, or PISA, which tests 15-year-olds in the world’s leading industrialized nations on their reading comprehension and ability to use what they’ve learned in math and science to solve real problems — the most important skills for succeeding in college and life. America’s 15-year-olds have not been distinguishing themselves in the PISA exams compared with students in Singapore, Finland and Shanghai.
To better understand why some students thrive taking the PISA tests and others do not, Andreas Schleicher, who oversees the exams for the O.E.C.D., was encouraged by the O.E.C.D. countries to look beyond the classrooms. So starting with four countries in 2006, and then adding 14 more in 2009, the PISA team went to the parents of 5,000 students and interviewed them “about how they raised their kids and then compared that with the test results” for each of those years, Schleicher explained to me. Two weeks ago, the PISA team published the three main findings of its study:
“Fifteen-year-old students whose parents often read books with them during their first year of primary school show markedly higher scores in PISA 2009 than students whose parents read with them infrequently or not at all. The performance advantage among students whose parents read to them in their early school years is evident regardless of the family’s socioeconomic background. Parents’ engagement with their 15-year-olds is strongly associated with better performance in PISA.”
Schleicher explained to me that “just asking your child how was their school day and showing genuine interest in the learning that they are doing can have the same impact as hours of private tutoring. It is something every parent can do, no matter what their education level or social background.”
For instance, the PISA study revealed that “students whose parents reported that they had read a book with their child ‘every day or almost every day’ or ‘once or twice a week’ during the first year of primary school have markedly higher scores in PISA 2009 than students whose parents reported that they had read a book with their child ‘never or almost never’ or only ‘once or twice a month.’ On average, the score difference is 25 points, the equivalent of well over half a school year.”
Yes, students from more well-to-do households are more likely to have more involved parents. “However,” the PISA team found, “even when comparing students of similar socioeconomic backgrounds, those students whose parents regularly read books to them when they were in the first year of primary school score 14 points higher, on average, than students whose parents did not.”
The kind of parental involvement matters, as well. “For example,” the PISA study noted, “on average, the score point difference in reading that is associated with parental involvement is largest when parents read a book with their child, when they talk about things they have done during the day, and when they tell stories to their children.” The score point difference is smallest when parental involvement takes the form of simply playing with their children.
These PISA findings were echoed in a recent study by the National School Boards Association’s Center for Public Education, and written up by the center’s director, Patte Barth, in the latest issue of The American School Board Journal.
The study, called “Back to School: How parent involvement affects student achievement,” found something “somewhat surprising,” wrote Barth: “Parent involvement can take many forms, but only a few of them relate to higher student performance. Of those that work, parental actions that support children’s learning at home are most likely to have an impact on academic achievement at school.
“Monitoring homework; making sure children get to school; rewarding their efforts and talking up the idea of going to college. These parent actions are linked to better attendance, grades, test scores, and preparation for college,” Barth wrote. “The study found that getting parents involved with their children’s learning at home is a more powerful driver of achievement than parents attending P.T.A. and school board meetings, volunteering in classrooms, participating in fund-raising, and showing up at back-to-school nights.”
To be sure, there is no substitute for a good teacher. There is nothing more valuable than great classroom instruction. But let’s stop putting the whole burden on teachers. We also need better parents. Better parents can make every teacher more effective.
Smith said there is no similar problem in Rhode Island; she described the issue as specific to Delaware.
“In the case of Delaware, there just wasn’t the demand. … It was not at a scale that we could have a cost-effective program,” Smith said.
A tough economy increased competition for teaching jobs, which is one reason fewer Delaware Teaching Fellows than expected were placed in schools, Ruszkowski said. But that is not the only reason the program will not continue, he said. Other factors included a late start and not enough time to build relationships with the schools that might hire these teachers. Given more time, Ruszkowski said, the program would have been successful. Really? Then why quit it, seriously? Where’s the fierce urgency of now? Thanks for wasting my tax dollars…
That’s a view shared by Paul Herdman, president and CEO of the Rodel Foundation of Delaware, a nonprofit involved in education reform. The idea of creating a new pipeline of teachers is a good one, he said. There is a need to grow the number of high-quality applicants for hard-to-staff subjects and schools. Just because this effort did not take off does not mean the idea behind it was unfounded, he said. What else is there to say? I guess this is supposed to sound pretty good? Huh? Just more proof that Edreform is a gimmick-filled fail fest. So sad, though, that these efforts are failing kids when real solutions like more resources and less programmatic static in our schools are easily more obtainable with $119MM and just a few fewer rubrics and failed measurements… Maybe better to go back to the choice strategy… I can’t even laugh anymore, this is so sad…
“Everything we try isn’t always going to work,” Herdman said. An experienced voice speaks.
About $290,000 in state money was spent on stipends for the 24 fellows who completed the summer institute, which resulted in 14 of those new teachers being successfully placed in Delaware schools. The Rodel Foundation of Delaware contributed about $215,000 to the effort. About $350,000 in federal grant money that was to be spent on the program remains; it will be reallocated to other statewide programs in a way that meets the state’s grant obligations, said Alison Kepner, a state Department of Education spokeswoman.
The Delaware Teaching Fellows program was one component of the state’s effort to increase the number of teachers available to academically struggling schools and to hard-to-staff areas, such as special education. Several similar programs remain, including a Science, Technology, Engineering and Math (STEM) residency program for people who worked in one of those fields and opt for a new career as teachers. The state also has several districts and charters that work with Teach for America, a program that places top college graduates who did not necessarily major in education.
The 14 Delaware Teaching Fellows already placed in schools will continue through the program, receiving support and training. They were placed in districts including Indian River and Christina, and in some charter schools.
At Indian River, superintendent Susan Bunting said her staff was pleased with the teachers who were hired. Vision mouthpiece comes through with clutch quote!
“We will miss that opportunity, it was good for us,” Bunting said.
Sussex Central High School principal Jay Owens said he was disappointed to learn there will not be another round of teachers trained through the program. He found those trained through the program to be bright, articulate and knowledgeable about the art of teaching.
“I am pleased with the people that I have seen,” Owens said. “They embraced our vision for Sussex Central.”
What an unprofessional coward.
- We are ideologically opposed to RTTT.
- While we are school board members in Delaware, we do not send this note in that capacity, but rather as taxpayers in good standing.
- This note is also being shared with interested parties, including both the House and Senate education committee chairs in Delaware, the statewide education reporter at the DE newspaper of record, and several other interested parties in Delaware education. Our goals are to ask for clarification, to inform the public, to challenge our current path in order to demand results, and to ask the parties on this e-mail to offer answers and solutions to this pressing issue, which calls for immediate consideration.
- Why is USED offering guidance calling for 300 hours, then neither enforcing that standard within the scope of SIG application review nor offering adequate dollars to achieve the task?
- Why is DDOE providing LEAs with a SIG application, and more importantly and specifically, the scoring rubric for that application, that awards maximum points for a 10% ILT component (ILT is mandatory in PZ schools), when 10% ILT neither equals nor exceeds the 300 hours stipulated by SIG as efficacious?
- Why is DE using RTTT/SIG monies for targeted ELT (which may get only a very small cohort of students to 300 hours a year) when federal SIG guidelines prohibit 1003(g) dollars from being used for ELT in a targeted fashion (it must be for all students in the school)?
- Why are we deploying federal ARRA dollars in a manner so inconsistent with the spirit of federal guidelines on ELT/ILT that it could be construed as willfully wasteful?