Senior Data Infrastructure Engineer

$100,000 - $200,000 / year

Job Description

Our client is seeking a Senior Data Infrastructure Engineer to support their Federal client’s Data Program and develop tools and infrastructure for data processing use cases.

Responsibilities
• Develop applications using programming languages such as Java, Python, Rust, and Scala.

• Execute the complete software development lifecycle (SDLC), including requirements definition, implementation, testing, technical documentation, and deployment of software applications.

• Assemble data storage and processing infrastructure that supports working nimbly with large volumes of analytics data.

• Design and build robust and scalable solutions for managing structured and unstructured data.

• Instrument and automate databases.

• Troubleshoot and modify database internals.

• Develop and document service application programming interfaces (APIs) for the collection, cleansing, and storage of data.

• Work with data scientists, database architects, data analysts, business users, and business managers to identify and develop data processing use cases.

• Work with data architects, business users, and business managers to identify and develop data infrastructure, including data layers, APIs, applications, and processing pipelines.

• Orchestrate server environments on the data platform with tools such as Puppet and Ansible.

• Develop interfaces and applications for accessing and visualizing data.

• Develop line of business applications using Salesforce and SharePoint platforms for collecting and presenting data.

• Work with business users and the data operations group to develop automated ETL routines to ingest disparate sources of data into SQL databases.

• Work with other data architects to develop databases and data models for PostgreSQL and SQL Server using tools such as SQL Power Architect.

• Document and maintain systems designs and operations manuals.

• Assemble data storage, processing, and integration infrastructure.

• Provide automation of data management processes.

• Develop tools to support a team of data architects, data analysts, and data scientists.

• Develop Microsoft Access frontends.

Qualifications
5+ years’ experience in the following areas:

• Experience with infrastructure automation tools, such as Puppet or Ansible

• Experience with Apache or other web server configuration

• Experience deploying open-source products

• Experience with Linux administration, automation, and shell scripting

In addition, we are looking for the following skills and competencies:

• Comfortable working with one of Python, Ruby, Java, or C

• Excellent communication skills and a proven ability to work with customers, senior management, and other technical teams

• Excellent documentation skills and the ability to recommend best practices and articulate process improvements and required changes

• Exceptionally organized, with the ability to meet production deadlines and self-direct when necessary

• Experience implementing Hadoop and related technologies

• Knowledge of massively parallel processing (MPP) databases such as Greenplum or Redshift

• Experience working with massive amounts of data in a high-availability environment

• Experience with database optimization, performance tuning, health monitoring, and administration
