World Journal of Social Sciences and Humanities
ISSN (Print): 2474-1426 | ISSN (Online): 2474-1434 | Website: https://www.sciepub.com/journal/wjssh
World Journal of Social Sciences and Humanities. 2019, 5(3), 160-175
DOI: 10.12691/wjssh-5-3-6
Open Access Article

Monitoring and Evaluation Systems: The Missing Strand in the African Transformational Development Agenda

Vincent Kanyamuna1, Derica Alba Kotzé2 and Million Phiri3

1Department of Development Studies, School of Humanities and Social Sciences, University of Zambia, Lusaka, Zambia

2Department of Development Studies, School of Social Sciences, University of South Africa, Pretoria, South Africa

3Department of Population Studies, School of Humanities and Social Sciences, University of Zambia, Lusaka, Zambia

Pub. Date: September 17, 2019

Cite this paper:
Vincent Kanyamuna, Derica Alba Kotzé and Million Phiri. Monitoring and Evaluation Systems: The Missing Strand in the African Transformational Development Agenda. World Journal of Social Sciences and Humanities. 2019; 5(3):160-175. doi: 10.12691/wjssh-5-3-6

Abstract

Today, monitoring and evaluation (M&E) systems are structural arrangements that many governments and other development agencies are building and strengthening to enhance their performance, demonstrate results to stakeholders, and meet internal information needs. This practice and commitment is more evident in developed than in developing countries. In many African countries, commitment to implementing functional M&E systems remains noticeably low. Most M&E systems in Africa are still at an embryonic stage and are unable to supply relevant information for stakeholder use. Worse still, the demand for M&E information by stakeholders, both internal and external, is minimal among and across potential users in Africa. There has been no transformational resolve, especially by governments and key development agencies, to sustainably build and strengthen M&E systems on the continent. Yet for Africa to confront and resolve its many social, economic and political challenges, it must commit wholeheartedly to a transformational development agenda. Despite the currently gloomy M&E arrangements, there are notable, though often fragmented, efforts in some countries as well as in the continental and regional development blocs such as the AU, SADC, AMU, CEN-SAD, COMESA, EAC, ECCAS, ECOWAS and IGAD. This paper contends that commitment by African governments to building and sustaining M&E systems as an instrument of good governance should be at the top of the transformational development agenda, not rhetorically but pragmatically. Identified here as the missing strand, M&E systems are deemed key to promoting and achieving the desired culture of results across the African continent. With endless and increasing reports of corruption and of poor choices in development interventions owing to a lack of strategic prioritisation, M&E systems stand ready to offer evidence-based information in support of sound decision making, policy formulation and implementation. Unless Africa channels its political, organisational, human, technical, technological and financial resources towards transforming M&E in every country, the hope for a better Africa enshrined in the continental Agenda 2063 vision of "The Africa We Want" will remain a wish, never to be realised. Essentially, a culture of results is something Africa and its people should cherish and pursue without hesitation.

Keywords:
monitoring; evaluation; M&E system; results-based management; culture of results; evidence-based; whole-of-government M&E system; results

Creative Commons: This work is licensed under a Creative Commons Attribution 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/
