This study sought to establish the factors that affect data quality in private hospitals in Lagos State. The objectives of the study were: to examine the effect of internal factors on data quality in private hospitals; to find out how external factors affect data quality in private hospitals; and to find out how data quality can be improved in private hospitals. Specific emphasis was placed on the effect of internal and external factors on data quality in private hospitals in Lagos State. The study used a case study research design on a population comprising the staff of the respective clinics, including administrators, in-charges and data entry staff, among others. A total of 111 respondents were selected for the study, using both random and non-random sampling techniques. The study was guided by a quantitative paradigm, complemented by substantial qualitative methods. Questionnaires were self-administered, which provided sufficient data from the selected sample, and interviews were used to obtain detailed data to complement and triangulate the questionnaire data. Data from the questionnaires was analyzed quantitatively using the Statistical Package for the Social Sciences (SPSS), where correlation was used to establish the relationship between the factors and data quality, and was presented in the form of frequency tables and bar graphs. The study findings confirmed that internal and external factors negatively affect data quality in private hospitals. The study recommended that private hospitals purposely invest in data departments that can take charge of the monitoring and evaluation function, conduct formal training for all staff in data management, and carry out joint supervision in quality assurance and improvement (QA/QI) to promote sustainability in private hospitals.

1.1 Background of the study
Before the rise of the inexpensive server, massive mainframe computers were used to maintain name and address data so that mail could be properly routed to its destination. The mainframes used business rules to correct common misspellings and typographical errors in name and address data, as well as to track customers who had moved, died, gone to prison, married, divorced, or experienced other life-changing events (Olson, 2003). Government agencies began to make postal data available to a few service companies to cross-reference customer data with the National Change of Address (NCOA) registry. This technology saved large companies millions of dollars in comparison to manual correction of customer data. Large companies saved on postage, as bills and direct marketing materials made their way to the intended customer more accurately. Initially sold as a service, data quality moved inside the walls of corporations, as low-cost and powerful server technology became available (Olson, 2003).

In the 1960s, Zero Defects (or ZD) was a management-led programme to eliminate defects in industrial production that enjoyed brief popularity in American industry from 1964 to the early 1970s (Halpin, 1966). Quality expert Philip Crosby later incorporated it into his "Absolutes of Quality Management", and it enjoyed a renaissance in the American automobile industry in the 1990s, as a performance goal more than as a programme. Although applicable to any type of enterprise, it has been primarily adopted within supply chains wherever large volumes of components are being purchased (common items such as nuts and bolts are good examples).

In the 1990s, most companies all over the world began to set up data governance teams whose sole role in the corporation was to be responsible for data quality. In some organizations, this data governance function was established as part of a larger regulatory compliance function, a recognition of the importance of data and information quality to organizations, because problems with data quality arise not only from incorrect data; inconsistent data is a problem as well. This has necessitated the elimination of data shadow systems; centralizing data in a warehouse is one of the initiatives a company can take to ensure data consistency (Olson, 2003).

By the start of the year 2000, enterprises, scientists, and researchers had started to participate in data curation communities to improve the quality of their common data. The market was going some way towards providing data quality assurance: a number of vendors made tools for analyzing and repairing poor-quality data in situ, service providers cleaned data on a contract basis, and consultants advised on fixing processes or systems to avoid data quality problems in the first place (Redman, 2004). Most data quality tools offer a series of functions for improving data, which may include data profiling, data standardization, geocoding, matching or linking, and monitoring, that is, keeping track of data quality over time and reporting variations in quality. These functions operate in both batch and real time because, once the data is initially cleansed, companies often want to build the processes into enterprise applications to keep it clean (Redman, 2004).

This thereafter necessitated the formation of the International Association for Information and Data Quality (IAIDQ), which was established in 2004 to provide a focal point for professionals and researchers in the field of data quality. It was also coupled with the introduction and certification of ISO 8000, the international standard for data quality (Olson, 2003).

Globally, reliable and accurate public health information is essential for monitoring health and for evaluating and improving the delivery of health-care services and programmes (AbouZahr, 2005). As countries report their progress towards achieving the United Nations Millennium Development Goals, the need for high-quality data has become ever more pressing, yet it has often been neglected. Furthermore, funding and support for public health activities, such as immunization programmes, remain contingent on demonstrating coverage using routine statistics (Doyle, 2009). However, assuring the quality of health information systems remains a challenge.

In Africa, studies of public health information systems frequently document problems with data quality, such as incomplete records and untimely reporting (Makombe, 2008). Yet these systems are often the only data sources available for the continuous, routine monitoring of health programmes. Efforts have been made to improve the quality and management of public health information systems in developing countries. Two examples are the Health Metrics Network, an international network that seeks to improve the quality of health information from various sources, and the Performance of Routine Information System Management (PRISM) framework, which was developed as a method for assessing the strengths and weaknesses of routine health information systems (Hotchkiss, 2010). Other initiatives, such as the Data Quality Audit, have been used by the GAVI Alliance to improve the monitoring of immunization coverage (Doyle, 2010). However, the complex nature of health information systems and the demands placed upon them have complicated efforts to improve the quality of routine data (Barron, 2010).

Studies done in Kenya on the Prevention of Mother-to-Child Transmission (PMTCT) programme showed that one unexpected complication that arose during the study could have reduced the effect of the data improvement intervention. The PMTCT programme in Kenya is relatively dynamic, and the names and definitions of the data elements used for monitoring are frequently changed (Kimaro, 2005). Several challenging changes occurred during the study. For example, the data element used in the District Health Information System (DHIS) to record whether a baby had undergone a polymerase chain reaction test for HIV at 6 weeks was initially titled "HIV 1st test of baby born to HIV-positive woman" but was later changed to "HIV PCR test of baby born to HIV-positive woman at 6 weeks or later". Such changes were made without the district offices providing definitions to the clinics. This could have caused considerable confusion at individual facilities and compromised the quality of reporting on that particular data element (Kimaro, 2005).

Despite these limitations, the improvement in PMTCT data quality observed in this study is encouraging, for it suggests that similar approaches could improve the quality of the data needed for decision-making and resource allocation in other public health programmes (Kimaro, 2005). The rationalization of data collection tools, clear definitions of data elements, continuous feedback on data quality and intermittent but regular data audits are effective ways of improving data quality. However, while this study shows that public health information can be improved, the final result falls short of what we should accept from our health information systems.

In hospitals in Nigeria, health care data collected provide government authorities like the Ministry of Health with information required to not only review the services of all hospitals under their control, but also to plan for the future. In addition, the use of a disease classification system at primary health care level enables the government to collect data on the health status of the community and provide detailed national health statistics. In some countries, the ministry of health determines whether hospitals are required to supply information only on the main conditions or on all diagnoses treated and procedures performed (Kwesiga, 2001).

For most private hospitals in Nigeria, many clinicians assume that the data contained and portrayed in their health systems is absolute and error-free, or that the errors are not important. But error and uncertainty are inherent in all data, and all errors affect the final uses to which the data may be put. Clinics and most health units do not take time to examine the information quality chain responsible for their data, and their documentation is not consistent with data management principles, making it hard for them to know and understand the data and determine its "fitness for use" (Kwesiga, 2001).

Most clinics rush to submit forged data sets upon request, and these normally contain acute problems traceable right from entry to conversion. In addition to forging data sets, most of the clinics avail raw data in the form of health reports which are sometimes written in ink, and these data sets are very hard to integrate in case they are needed to provide meaningful information on health issues in such clinics or health centres. Hence, in addition to threatening patient safety, poor data quality increases healthcare costs and inhibits health information exchange, research, and performance measurement initiatives (Ministry of Health Report, 2006).

Worse still, some of the clinics have a tendency to wait until the periods when this information is needed; normally, compilation of data sets begins only one or two months before the dates when they know that officials from UHMG or the Ministry of Health will come to collect the data. This implies that such data sets have loopholes, given that they do not fully represent the time period over which they are supposed to be compiled. This leaves a lot to be desired, given that the data sets are urgently needed to address public health concerns in certain regions.

1.2 Problem statement
Healthcare data and its transformation into meaningful information is a central concern for consumers, healthcare providers, and the government. Standards, technologies, education, and research are required to capture, use, and maintain accurate healthcare data and to facilitate the transition from paper to electronic systems in order to effectively formulate policies regarding health, especially in the public domain (Wang and Storey, 1996). It is on this note that UHMG supports private hospitals through training, mentoring and the provision of data gathering tools so that they can collect, analyze, and report to the Ministry of Health through the State Information System and then to UHMG. Despite all these efforts, data from these clinics is usually inaccurate, late and incomplete, and even obtaining these reports is a struggle. This makes it hard for stakeholders to use the data to make informed decisions to improve programme performance (UHMG Data Quality Assessment Report, 2015).

The above statement therefore depicts that the essentials of data management, especially the clinical coding procedure, are often neglected in health clinic databases, and very often health-related data are used uncritically without consideration of the errors they contain, which can lead to erroneous results, misleading information, unwise decisions and increased costs. The study therefore intended to establish the different factors that affect data quality in the private health sector.

1.3 Purpose of the study
The purpose of the study was to establish the factors that affect data quality in private hospitals in Nigeria, with special emphasis on private hospitals in Lagos State.

1.4 Specific objectives
This study was guided by the following objectives:

1. To examine the effect of internal factors on data quality in private hospitals;

2. To find out how external factors affect data quality in private hospitals.

1.5 Research questions
This study sought to answer the following questions:

1. How do internal factors affect data quality in private hospitals?

2. What is the effect of external factors on data quality in private hospitals?
