Job Summary:
The Data Architect is responsible for designing, developing, and managing the organization’s data architecture. This role involves creating blueprints for how data will be stored, consumed, integrated, and managed across different data systems. The Data Architect will work closely with data engineers, data scientists, and other stakeholders to ensure that the data architecture aligns with the overall business objectives and supports the organization’s data needs.
Duties and Responsibilities:
Design and implement data architectures that support the organization’s business objectives, ensuring scalability, reliability, and security.
Develop and maintain comprehensive data models, including logical, physical, and conceptual models, to meet business requirements.
Define data architecture standards, guidelines, and best practices for managing data across the organization.
Collaborate with data engineers, data scientists, and other stakeholders to design and implement data pipelines, ensuring efficient data flow and integration across systems.
Ensure data quality and governance by implementing data validation, cleansing, and transformation processes.
Oversee the development and maintenance of data warehouses, data lakes, and other data storage solutions.
Evaluate and recommend new technologies, tools, and practices to improve the organization’s data architecture and data management processes.
Provide technical leadership and guidance on data architecture and data management best practices to the data engineering team.
Monitor and optimize the performance of data systems, ensuring they meet business requirements and SLAs.
Ensure compliance with data privacy regulations and data security standards across all data systems.
Job Requirements:
Technical Skills:
Strong experience in data modeling, data warehousing, and database design.
Proficiency in SQL and experience with database management systems such as MySQL, PostgreSQL, SQL Server, Oracle, etc.
Experience with big data technologies such as Hadoop, Spark, Kafka, etc.
Hands-on experience with cloud data platforms (e.g., AWS, Azure, GCP) and related services.
Knowledge of ETL/ELT processes and tools.
Familiarity with data governance and data quality frameworks.
Experience with data integration tools and techniques, including APIs, microservices, and batch processing.