Python Engineer

Thermo Fisher Scientific
Aug 07, 2023

How will you make an impact?

As part of an organization that provides analytics-driven data solutions for all businesses across Thermo Fisher Scientific, you will be instrumental in helping our business partners and customers with their data and analytics needs.

What will you do?

  • Own and deliver enhancements associated with Data platform solutions.
  • Maintain and enhance scalable data pipelines and build out new API integrations to support continuing increases in data volume and complexity.
  • Enhance and support solutions using PySpark/EMR, SQL and databases, AWS Athena, S3, Redshift, AWS API Gateway, Lambda, Glue, and other data engineering technologies.
  • Write and maintain complex queries as required to implement ETL and data solutions.
  • Implement solutions using AWS and supporting tooling, including GitHub, Jenkins, Terraform, Jira, and Confluence.
  • Follow agile development methodologies to deliver solutions and product features, applying DevOps, DataOps, and DevSecOps practices.
  • Propose and continuously implement data-load optimizations to improve load performance.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Participate in an on-call schedule to address critical operational incidents and business requests.

How will you get here?

  • Bachelor’s degree in Computer Science or a related field with at least one year of data engineering experience using AWS services and PySpark/EMR
  • Full-lifecycle project implementation experience in AWS using PySpark/EMR, Athena, S3, Redshift, AWS API Gateway, Lambda, Glue, and other managed services is preferred
  • Hands-on experience with S3, AWS Glue jobs, Lambda, and API Gateway
  • Working SQL experience, including troubleshooting SQL code; Redshift knowledge is an added advantage
  • Experience using Jira for task prioritization and Confluence and other tools for documentation
  • Strong database experience: writing complex queries, query optimization, debugging, user-defined functions, views, indexes, etc.
  • Experience with source control systems such as Git and Bitbucket, and with build and continuous integration tools such as Jenkins
  • Exposure to Kafka, Redshift, and SageMaker would be an added advantage
  • Exposure to data visualization tools such as Power BI and Tableau
  • Functional knowledge in the areas of Sales & Distribution, Material Management, Finance, and Production Planning is preferred

Knowledge, Skills, Abilities

  • Excellent written, verbal, interpersonal, and stakeholder communication skills
  • Ability to analyze trends in large datasets
  • Ability to work with cross-functional teams across multiple regions and time zones by effectively leveraging multiple forms of communication (email, MS Teams voice and chat, meetings)
  • Excellent prioritization and problem-solving skills
  • Action Oriented: Has a sense of urgency, high energy, and enthusiasm in managing systems and platforms
  • Drives Results: Consistently achieves results, even under tough circumstances
  • Global Perspective: Takes a broad view when approaching issues, using a global lens
  • Continuously learn and train other team members
  • Drive to meet and exceed BI operational SLAs for ServiceNow incidents, major incidents, xMatters alerts, employee experience metrics, and BI application/process availability metrics
Experience: 1 – 3 years
Compensation: Negotiation
Posted: October 29, 2021
Employment type: Full Time
Location: Budapest, Central Hungary
Workplace: On-site