About El Corte Inglés
- Play a critical role as a member of the leadership team in shaping and supporting our overall company vision, day-to-day operations, and culture.
- Set the technical vision and build the technical product roadmap from launch to scale, including defining long-term goals and strategies
- Define best practices around coding methodologies, software development, and quality assurance
- Define innovative technical requirements and systems while balancing time, feasibility, cost and customer experience
- Build and support production products
- Ensure our internal processes and services comply with privacy and security regulations
- Establish a high performing, inclusive engineering culture focused on innovation, execution, growth and development
- Set a high bar for our overall engineering practices in support of our mission and goals
- Develop goals, roadmaps and delivery dates to help us scale quickly and sustainably
- Collaborate closely with Product, Business, Marketing and Data Science
- Experience with financial and transactional systems
- Experience engineering for large volumes of data at scale
- Experience with financial audit and compliance is a plus
- Experience building successful consumer-facing web and mobile apps at scale
- Use statistical methods to analyze data and generate useful business reports and insights
- Analyze publisher-side and demand-side data, provide actionable insights on improving monetisation to the Operations team, and implement the resulting strategies
- Provide support for ad hoc data requests from the Operations teams and Management
- Use third-party APIs, web scraping, and CSV report processing to build dashboards in Google Data Studio
- Provide support for analytics process monitoring and troubleshooting
- Support in creating reports, dashboards and models
- Independently determine the appropriate approach for new assignments
- Inquisitive, with strong problem-solving skills
- Ability to own projects and work independently once given a direction
- Experience working directly with business users to build reports, dashboards, and models, and to answer business questions with data
- Tools expertise: relational databases and SQL are a must, along with Python
- Familiarity with AWS Athena, Redshift a plus
- 2-7 years
- UG: B.Tech/B.E.; PG: M.Tech/M.Sc. in Computer Science, Statistics, Maths, or Data Science/Data Analytics
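The CSV-report-processing workflow described above can be sketched with only the Python standard library; the report columns (`publisher`, `revenue`) and the aggregation are hypothetical, stand-ins for whatever the real demand-side reports contain:

```python
import csv
import io
from collections import defaultdict

def revenue_by_publisher(csv_text):
    """Aggregate revenue per publisher from a (hypothetical) demand-side CSV report."""
    totals = defaultdict(float)
    for row in csv.DictReader(io.StringIO(csv_text)):
        totals[row["publisher"]] += float(row["revenue"])
    return dict(totals)

report = """publisher,revenue
acme,10.50
acme,4.50
globex,7.00
"""
print(revenue_by_publisher(report))  # {'acme': 15.0, 'globex': 7.0}
```

In practice the aggregated output would be written to a table that Google Data Studio reads, rather than printed.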
We are looking for a Machine Learning Engineer for one of our premium clients.
Experience: 2-9 years
- Python, PySpark, and the Python scientific stack
- MLFlow, Grafana, and Prometheus for machine learning pipeline management and monitoring
- SQL, Airflow, Databricks, our own open-source data pipelining framework called Kedro, and Dask/RAPIDS
- Django, GraphQL, and ReactJS for horizontal product development
- Container technologies such as Docker and Kubernetes; CircleCI/Jenkins for CI/CD; cloud solutions such as AWS, GCP, and Azure, as well as Terraform and CloudFormation for deployment
Job Description - Sr Azure Data Engineer
Roles & Responsibilities:
- Hands-on programming in C#/.NET.
- Develop serverless applications using Azure Function Apps.
- Writing complex SQL Queries, Stored procedures, and Views.
- Creating Data processing pipeline(s).
- Develop / Manage large-scale Data Warehousing and Data processing solutions.
- Provide clean, usable data and make recommendations on data efficiency, quality, and integrity.
- Should have working experience in C#/.NET.
- Proficient with writing SQL queries, Stored Procedures, and Views
- Should have worked on Azure Cloud Stack.
- Should have working experience in developing serverless code.
- Experience with Azure Data Factory is mandatory.
- 4+ years of relevant experience
We are looking for an outstanding ML Architect (Deployments) with expertise in deploying Machine Learning solutions/models into production and scaling them to serve millions of customers. The ideal candidate has an adaptable, productive working style that fits a fast-moving environment.
- 5+ years deploying Machine Learning pipelines in large enterprise production systems.
- Experience developing end-to-end ML solutions, from business hypothesis to deployment, and an understanding of the entirety of the ML development life cycle.
- Expert in modern software development practices; solid experience with source control management and CI/CD.
- Proficient in designing relevant architecture/microservices to fulfil application integration, model monitoring, training/re-training, model management, model deployment, model experimentation/development, and alert mechanisms.
- Experience with public cloud platforms (Azure, AWS, GCP).
- Serverless services like AWS Lambda, Azure Functions, and/or Cloud Functions.
- Orchestration services like Data Factory, Data Pipeline, and/or Dataflow.
- Data science workbench/managed services like Azure Machine Learning, SageMaker, and/or AI Platform.
- Data warehouse services like Snowflake, Redshift, BigQuery, and Azure SQL DW.
- Distributed computing services like PySpark, EMR, and Databricks.
- Data storage services like Cloud Storage, S3, Blob Storage, and S3 Glacier.
- Data visualization tools like Power BI, Tableau, QuickSight, and/or Qlik.
- Proven experience serving up predictive algorithms and analytics through batch and real-time APIs.
- Solid experience working with software engineers, data scientists, product owners, business analysts, project managers, and business stakeholders to design holistic solutions.
- Strong technical acumen around automated testing.
- Extensive background in statistical analysis and modeling (distributions, hypothesis testing, probability theory, etc.)
- Strong hands-on experience with statistical packages and ML libraries (e.g., Python's scikit-learn, Spark MLlib, etc.)
- Experience in effective data exploration and visualization (e.g., Excel, Power BI, Tableau, Qlik, etc.)
- Experience developing and debugging in one or more languages such as Java or Python.
- Ability to work in cross-functional teams.
- Apply Machine Learning techniques in production including, but not limited to, neural nets, regression, decision trees, random forests, ensembles, SVMs, Bayesian models, K-Means, etc.
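As an illustration of one technique on that list, here is a minimal pure-Python K-Means sketch; the points and the fixed initial centroids are made up for determinism, and a production system would use a library implementation such as scikit-learn's `KMeans` instead:

```python
def kmeans(points, centroids, iters=10):
    """Minimal K-Means on 2-D points; `centroids` are the initial guesses."""
    for _ in range(iters):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in centroids]
        for p in points:
            dists = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centroids]
            clusters[dists.index(min(dists))].append(p)
        # Update step: move each centroid to the mean of its cluster.
        centroids = [
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl)) if cl else c
            for cl, c in zip(clusters, centroids)
        ]
    return centroids

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
print(kmeans(pts, centroids=[(0, 0), (10, 10)]))  # two centroids, near (1/3, 1/3) and (31/3, 31/3)
```

The alternation of assignment and update steps is the whole algorithm; everything else in a real deployment (initialization strategy, convergence checks, vectorization) is engineering around it.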
Roles and Responsibilities:
- Deploy ML models into production and scale them to serve millions of customers.
- Technical solutioning skills with a deep understanding of API integrations, AI/Data Science, Big Data, and public cloud architectures/deployments in a SaaS environment.
- Strong stakeholder relationship management skills - able to influence and manage the expectations of senior executives.
- Strong networking skills, with the ability to build and maintain strong relationships with business, operations, and technology teams, internally and externally.
- Provide software design and programming support to projects.
Qualifications & Experience:
Engineering graduates and postgraduates, preferably in Computer Science from premier institutions, with 5-7 years of proven work experience as a Machine Learning Architect (Deployments) or in a similar role.
- Experience with relational SQL & NoSQL databases including MySQL & MongoDB.
- Familiar with the basic principles of distributed computing and data modeling.
- Experience with distributed data pipeline frameworks like Celery, Apache Airflow, etc.
- Experience with NLP and NER models is a bonus.
- Experience building reusable code and libraries for future use.
- Experience building REST APIs.
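A REST endpoint of the kind listed above can be sketched with only the Python standard library (WSGI); the `/health` route and its JSON payload are hypothetical, purely for illustration:

```python
import json

def app(environ, start_response):
    """Minimal WSGI REST endpoint: GET /health returns a JSON status."""
    if environ["PATH_INFO"] == "/health" and environ["REQUEST_METHOD"] == "GET":
        body = json.dumps({"status": "ok"}).encode()
        start_response("200 OK", [("Content-Type", "application/json")])
    else:
        body = json.dumps({"error": "not found"}).encode()
        start_response("404 Not Found", [("Content-Type", "application/json")])
    return [body]

# Serve locally with the stdlib for manual testing:
# from wsgiref.simple_server import make_server
# make_server("", 8000, app).serve_forever()
```

In a real service this would sit behind a framework and a production WSGI server, but the request-in, JSON-out shape is the same.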
Preference for candidates working in tech product companies
upGrad is an online education platform building the careers of tomorrow by offering the most industry-relevant programs in an immersive learning experience. Our mission is to create a new digital-first learning experience that delivers tangible career impact to individuals at scale. upGrad currently offers programs in Data Science, Machine Learning, Product Management, Digital Marketing, and Entrepreneurship, among others. upGrad is looking for people passionate about management and education to help design learning programs that keep working professionals sharp and relevant, and to help build the careers of tomorrow.
Position: Data Scientist
Long Term Contract position
Remote until the COVID-19 situation ends
Experience in applied data science, analytics, and data storytelling.
- Write well-documented code that can be shared and used across teams and can scale into existing products. Skills: SQL, advanced Python or R (descriptive/predictive models), Tableau visualization; working knowledge of Hadoop, BigQuery, Presto, and Vertica
- Apply your expertise in quantitative analysis, data mining, and the presentation of data to uncover unique actionable insights about customer service, health of public conversation and social media
- Inform, influence, support, and execute analysis that feeds into one of our many analytics domains - Customer analytics, product analytics, business operation analytics, cost analytics, media analytics, people analytics
- Select and deselect analytics priorities, insights, and data based on their ability to drive our desired outcomes
- Own the end-to-end process from initiation to deployment, sharing results with partners and leadership through ongoing communication and collaboration
- Mentor our global team of data analysts and create a sense of community and a learning environment for them
- Ability to communicate findings clearly to both technical and non-technical audiences and to effectively collaborate within cross-functional teams
- Working knowledge of agile framework and processes.
- You should be comfortable managing work plans, timelines and milestones
- You have a sense of urgency, move quickly and ship things
- You're experienced in metrics and experiment-driven development
- Experience in statistical methodology (multivariate, time-series, experimental design, data mining, etc.)
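A minimal example of the experiment-driven, hypothesis-testing approach named above, using only the Python standard library; the metric values for the control and variant groups are made up:

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples with unequal variances."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    return (mean(a) - mean(b)) / (va + vb) ** 0.5

control = [10.1, 9.8, 10.3, 10.0, 9.9]   # hypothetical baseline metric
variant = [10.9, 11.2, 10.8, 11.0, 11.1]  # hypothetical treatment metric
print(round(welch_t(variant, control), 2))  # 8.8 — far above typical significance cutoffs
```

In practice one would use a statistical package (e.g., `scipy.stats.ttest_ind` with `equal_var=False`) to also get the p-value, but the statistic itself is just this ratio of mean difference to pooled standard error.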
As a Data Warehouse Engineer in our team, you should have a proven ability to deliver high-quality work on time and with minimal supervision.
Develop or modify procedures to solve complex database design problems, including performance, scalability, security, and integration issues, for various clients (on-site and off-site).
Design, develop, test, and support the data warehouse solution.
Adopt best practices and industry standards, ensuring top-quality deliverables and playing an integral role in cross-functional system integration.
Design and implement formal data warehouse testing strategies and plans, including unit testing, functional testing, integration testing, performance testing, and validation testing.
Evaluate all existing hardware and software against required standards, and configure hardware clusters to match the scale of the data.
Perform data integration using enterprise development tool-sets (e.g., ETL, MDM, CDC, data masking, data quality).
Maintain and develop all logical and physical data models for the enterprise data warehouse (EDW).
Contribute to the long-term vision of the enterprise data warehouse (EDW) by delivering Agile solutions.
Interact with end users/clients and translate business language into technical requirements.
Act independently to expose and resolve problems.
Participate in data warehouse health monitoring and performance optimization, as well as quality documentation.
Job Requirements :
2+ years of experience in software development and data warehouse development for enterprise analytics.
2+ years of working with Python; significant Redshift experience is a must, along with exposure to other warehousing tools.
Deep expertise in data warehousing, dimensional modeling and the ability to bring best practices with regard to data management, ETL, API integrations, and data governance.
Experience working with data retrieval and manipulation tools for various data sources, such as relational databases (MySQL, PostgreSQL, Oracle) and cloud-based storage.
Experience with analytic and reporting tools (Tableau, Power BI, SSRS, SSAS). Experience with the AWS cloud stack (S3, Glue, Redshift, Lake Formation).
Experience with various DevOps practices, helping the client deploy and scale systems as required.
Strong verbal and written communication skills with other developers and business clients.
Knowledge of Logistics and/or Transportation Domain is a plus.
Ability to handle and ingest very large data sets (both real-time and batch data) efficiently.
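One common pattern behind efficient ingestion of very large data sets, batch or streaming, is fixed-size chunking, so memory use stays flat regardless of total volume. A stdlib-only sketch, with a synthetic record stream standing in for a real source:

```python
from itertools import islice

def chunked(records, size):
    """Yield fixed-size batches from any (possibly unbounded) record iterable."""
    it = iter(records)
    while batch := list(islice(it, size)):
        yield batch

# Hypothetical usage: stream a million synthetic rows in batches of 50,000.
sizes = [len(b) for b in chunked(range(1_000_000), 50_000)]
print(len(sizes), sizes[0])  # 20 50000
```

Each batch would typically be written to staging storage (e.g., S3) and then bulk-loaded into the warehouse, since per-row loads do not scale.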