Abbott is a global healthcare leader that helps people live more fully at all stages of life. Our portfolio of life-changing technologies spans the spectrum of healthcare, with leading businesses and products in diagnostics, medical devices, nutritionals, and branded generic medicines. Our 109,000 colleagues serve people in more than 160 countries.
Our Neuromodulation business is an area of expertise for Abbott. This business includes implantable devices compatible with mobile technology to help people who suffer from chronic pain and movement disorders. Our solutions include the Proclaim™ XR SCS System, the #1 spinal cord stimulator on the market; the Proclaim™ DRG Neurostimulator, the only FDA-approved DRG therapy; and Abbott RFA, a market leader in radiofrequency ablation therapy. These non-opioid therapies allow us to provide interventional pain therapy to patients throughout the pain continuum. Our deep brain stimulation technology helps people manage the symptoms of progressive diseases such as Parkinson’s disease and essential tremor while minimizing side effects.
Our location in Plano, TX currently has an opportunity for a Senior Data Engineer. The Senior Data Engineer is responsible for creating data pipelines and databases for several projects and works closely with full-stack developers and data scientists.
WHAT YOU’LL DO
- Provide subject matter expertise and hands-on delivery of data capture, curation, and consumption pipelines on Azure.
- Build Azure data solutions and provide technical perspective on storage, big data platform services, serverless architectures, the Hadoop ecosystem, vendor products, RDBMS, DW/DM, NoSQL databases, and security.
- Participate in deep architectural discussions to build confidence and ensure team success when building new solutions and migrating existing data applications on the Azure platform.
- Conduct full technical discovery, identifying pain points, business and technical requirements, “as is” and “to be” scenarios.
- Understand the strategic direction set by senior management as it relates to team goals.
- Use considerable judgment to define solutions, seeking guidance on complex problems.
EDUCATION AND EXPERIENCE YOU’LL BRING
- Bachelor’s degree in Computer Science or related technical field.
- Minimum of 5 years of hands-on experience with Azure and big data technologies such as PowerShell, C#, Java, Node.js, Python, SQL, ADLS/Blob, Spark/Spark SQL, Databricks, and Hive, and streaming technologies such as Kafka, Event Hubs, NiFi, etc.
- Minimum of 5 years of RDBMS experience.
- Experience using big data file formats and compression techniques. Experience working with developer tools such as Azure DevOps, Bitbucket, and Jira.
- Extensive hands-on experience implementing data migration and data processing using Azure services: ADLS, Azure Data Factory, Azure Functions, Synapse/DW, Azure SQL DB, Event Hubs, IoT Hub, Azure Stream Analytics, Azure Analysis Services, HDInsight, Databricks, Azure Data Catalog, Cosmos DB, ML Studio, AI/ML, etc.
- Experience with cloud migration methodologies and processes including tools like Azure Data Factory.