Enterprise Analytics -- Data Scientist
Job Description

As an Enterprise Analytics Data Scientist, you will be fascinated by data, algorithms, and analytics. You will obtain, manage, showcase, and evangelize actionable data science solutions. In addition to building responsive full-stack solutions, you will identify and act on advanced opportunities that support our Everything-as-a-Service strategy. To qualify for this role, you will have a proven track record of implementing scalable big data analytics, delivering predictive insights, and managing tier 1 DataOps. You will be a "doer": one who is not afraid to dive deep into technology, code, and dashboards to meet tight commitments.

Key Responsibilities

* Design, architect, and engineer big data solutions
* Explore, experiment with, and implement advanced algorithms for multiple business functions and domains
* Develop a modern in-situ data analytics lakehouse
* Instrument monitoring, self-healing, and self-describing data pipelines to meet tier 1 OLAs/OpX, i.e., own operational and innovation outcomes
* Implement continuous integration, continuous deployment, and DevOps practices

Additional Responsibilities

* Demonstrate a growth mindset: collaborate with customers and pursue joint outcomes with passion and a penchant for success
* Standardize DevOps, DataOps, and MLOps processes, technology, and architecture: propose and implement tailored solutions quickly, on demand
* Collaborate regularly with BRMs and business stakeholders to define requirements, technology, architecture, and deliverables
* Showcase ML, DL, and AI solutions, fostering mentor-mentee trust and relationships with business partners
* Publish and share work with others: contribute to the IP and publication portfolio

Desired Skills

* Big Data (Hadoop, Spark, Python, Hive)
* Pipeline Tools (ETL, dbt, Kafka, Flume, NiFi, StreamSets, Beam, Camel, Pandas)
* BI Tools (Power BI, Qlik, Tableau, Dash)
* Full Stack (React, Flutter, Node, Java, Angular)
* AI (Jupyter, Spark, H2O, Alteryx)
* Data Science (Python/R, Statistics, Machine Learning, Deep Learning)
* Design for Deployment (Docker, Kubernetes, Airflow)
* DevOps (JIRA, Confluence, GitHub, Jenkins)

Requirements

* 7+ years of data management and data science experience
* Hands-on data warehousing, research, and analytics experience
* MS in Computer Science or Information Systems
* 5+ years of experience with big data
* 3+ years of SDLC/PDLC responsibilities
* 2+ years of customer-facing solutions deployment/support experience
* Good communication skills; able to articulate concepts with clarity and confidence
* Experience working with an agile methodology (specifically, the Scrum and SAFe frameworks)
* Ambition and ability to spearhead tasks at the speed of the enterprise
* Demonstrate a growth mindset; share and welcome constructive input from others: embrace a "we win together" culture

#GlobalITIN

Job: Information Technology
Job Level: Expert

Hewlett Packard Enterprise is EEO F/M/Protected Veteran/Individual with Disabilities.
HPE will comply with all applicable laws related to the use of arrest and conviction records, including the San Francisco Fair Chance Ordinance and similar laws and will consider for employment qualified applicants with criminal histories.
Posted June 10 on Equest