We are looking for an outstanding ML Architect (Deployments) with expertise in deploying Machine Learning solutions/models into production and scaling them to serve millions of customers. The ideal candidate has an adaptable, productive working style that fits a fast-moving environment.
Skills:
- 5+ years deploying Machine Learning pipelines in large enterprise production systems.
- Experience developing end-to-end ML solutions, from business hypothesis to deployment, with an understanding of the entire ML development life cycle.
- Expert in modern software development practices, with solid experience in source control management and CI/CD.
- Proficient in designing relevant architectures/microservices to support application integration, model monitoring, training/retraining, model management, model deployment, model experimentation/development, and alert mechanisms.
- Experience with public cloud platforms (Azure, AWS, GCP).
- Serverless services such as AWS Lambda, Azure Functions, and/or Google Cloud Functions.
- Orchestration services such as Azure Data Factory, AWS Data Pipeline, and/or Google Cloud Dataflow.
- Data science workbench/managed services such as Azure Machine Learning, Amazon SageMaker, and/or Google AI Platform.
- Data warehouse services such as Snowflake, Amazon Redshift, BigQuery, and/or Azure SQL Data Warehouse.
- Distributed computing services such as PySpark, Amazon EMR, and/or Databricks.
- Data storage services such as Google Cloud Storage, Amazon S3, Azure Blob Storage, and/or S3 Glacier.
- Data visualization tools such as Power BI, Tableau, Amazon QuickSight, and/or Qlik.
- Proven experience serving predictive algorithms and analytics through batch and real-time APIs (a minimal serving sketch follows this list).
- Solid experience working with software engineers, data scientists, product owners, business analysts, project managers, and business stakeholders to design holistic solutions.
- Strong technical acumen around automated testing.
- Extensive background in statistical analysis and modeling (distributions, hypothesis testing, probability theory, etc.)
- Strong hands-on experience with statistical packages and ML libraries (e.g., Python scikit-learn, Spark MLlib).
- Experience in effective data exploration and visualization (e.g., Excel, Power BI, Tableau, Qlik).
- Experience developing and debugging in one or more languages such as Java or Python.
- Ability to work in cross-functional teams.
- Experience applying Machine Learning techniques in production, including but not limited to neural networks, regression, decision trees, random forests, ensembles, SVMs, Bayesian models, and k-means.
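As a concrete illustration of the serving requirement above, here is a minimal sketch of exposing a trained model through a real-time API. It assumes scikit-learn and Flask; the dataset, request payload, and endpoint are hypothetical and not a prescribed stack.

```python
# Minimal sketch: train a random forest and serve it as a real-time API.
# Production systems would add model versioning, monitoring, input
# validation, and a separate batch-scoring path.
from flask import Flask, jsonify, request
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(n_estimators=100, random_state=42).fit(X, y)

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    # Expects JSON like {"features": [5.1, 3.5, 1.4, 0.2]}
    features = request.get_json()["features"]
    prediction = model.predict([features])[0]
    return jsonify({"prediction": int(prediction)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```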
Roles and Responsibilities:
Deploying ML models into production and scaling them to serve millions of customers.
Technical solutioning skills, with a deep understanding of API integrations, AI/Data Science, Big Data, and public cloud architectures/deployments in a SaaS environment.
Strong stakeholder relationship management skills; able to influence and manage the expectations of senior executives.
Strong networking skills, with the ability to build and maintain strong relationships with business, operations, and technology teams, both internally and externally.
Provide software design and programming support to projects.
Qualifications & Experience:
Engineering graduates and postgraduates, preferably in Computer Science from premier institutions, with 5-7 years of proven experience as a Machine Learning Architect (Deployments) or in a similar role.
About Netmeds.com
Netmeds.com is a digital pharma platform that delivers authorized prescription and over-the-counter (OTC) medicine, in addition to other health-related products. With over 5.7 million customers in over 670 cities and towns throughout India, Netmeds aims to provide a solution for the speedy online purchase and quick delivery of verified prescriptions across India. Customers have a straightforward method for shopping for health products online through the website. To guarantee that only the appropriate medication is delivered, a group of licensed pharmacists examines the prescription to verify its legitimacy and determine the appropriate dosage. Users get access to over 70,000 prescription medications for chronic and recurring disorders via Netmeds. Additionally, customers have access to better lifestyle drugs and thousands of non-prescription goods through Netmeds.
Netmeds.com is promoted by Dadha Pharma, a Chennai-based business. The Dadha family has a long history in the pharmaceutical sector: they opened their first store selling pharmaceutical products in 1914 and branched out into the production of medications in 1972.
Similar jobs
Graas is a technology solution provider that uses predictive AI to turbo-charge growth for eCommerce businesses; we are “Growth-as-a-Service”. Graas integrates traditional data silos and applies a machine-learning AI engine that acts as an in-house data scientist, predicting trends and giving real-time insights and actionable recommendations for brands. The platform can also turn insights into action by seamlessly executing these recommendations across marketplace storefronts, brand.coms, social and conversational commerce, performance marketing, inventory management, warehousing, and last-mile logistics, all of which impact a brand’s bottom line and drive profitable growth.
Roles & Responsibilities:
- Implement real-time and batch data pipelines for disparate data sources (a minimal real-time consumer sketch follows this list).
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS technologies.
- Build and maintain an analytics layer that utilizes the underlying data to generate dashboards and provide actionable insights.
- Identify improvement areas in the current data system and implement optimizations.
- Work on specific areas of data governance including metadata management and data quality management.
- Participate in discussions with Product Management and Business stakeholders to understand functional requirements and interact with other cross-functional teams as needed to develop, test, and release features.
- Develop Proof-of-Concepts to validate new technology solutions or advancements.
- Work in an Agile Scrum team and help with planning, scoping and creation of technical solutions for the new product capabilities, through to continuous delivery to production.
- Work on building intelligent systems using various AI/ML algorithms.
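On the real-time side, here is a minimal polling sketch against an Amazon Kinesis stream using boto3. The stream name, region, and record handling are hypothetical placeholders; a production consumer would use the Kinesis Client Library or enhanced fan-out with proper checkpointing rather than a bare loop.

```python
# Minimal sketch: poll a Kinesis stream for real-time records with boto3.
import json
import time

import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")
STREAM = "example-events"  # hypothetical stream name

shard_id = kinesis.describe_stream(StreamName=STREAM)[
    "StreamDescription"]["Shards"][0]["ShardId"]
iterator = kinesis.get_shard_iterator(
    StreamName=STREAM, ShardId=shard_id, ShardIteratorType="LATEST"
)["ShardIterator"]

while True:
    response = kinesis.get_records(ShardIterator=iterator, Limit=100)
    for record in response["Records"]:
        event = json.loads(record["Data"])
        print(event)  # placeholder: transform and load downstream
    iterator = response["NextShardIterator"]
    time.sleep(1)
```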
Desired Experience/Skill:
- Must have worked on Analytics Applications involving Data Lakes, Data Warehouses and Reporting Implementations.
- Experience with private and public cloud architectures and their pros/cons.
- Ability to write robust code in Python and SQL for data processing. Experience in libraries such as Pandas is a must; knowledge of one of the frameworks such as Django or Flask is a plus.
- Experience in implementing data processing pipelines using AWS services: Kinesis, Lambda, Redshift/Snowflake, RDS.
- Knowledge of Kafka and Redis is preferred.
- Experience designing and implementing real-time and batch pipelines; knowledge of Airflow is preferred (a minimal DAG sketch follows this list).
- Familiarity with machine learning frameworks (like Keras or PyTorch) and libraries (like scikit-learn)
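For the batch side, below is a minimal Airflow DAG sketch. The DAG id and the extract/transform/load callables are hypothetical placeholders standing in for real pipeline logic.

```python
# Minimal sketch of a daily batch pipeline in Airflow (2.x API).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data from source systems")  # placeholder

def transform():
    print("clean and aggregate with pandas/SQL")  # placeholder

def load():
    print("write results to the warehouse")  # placeholder

with DAG(
    dag_id="daily_batch_pipeline",
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3  # run extract, then transform, then load
```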
• Project Planning and Management
o Take end-to-end ownership of multiple projects / project tracks
o Create and maintain project plans and other related documentation for project objectives, scope, schedule, and delivery milestones
o Lead and participate across all phases of software engineering, from requirements gathering to go-live
o Lead internal team meetings on solution architecture, effort estimation, manpower planning, and resource (software/hardware/licensing) planning
o Manage RIDA (Risks, Impediments, Dependencies, Assumptions) for projects by developing effective mitigation plans
• Team Management
o Act as the Scrum Master
o Conduct Scrum ceremonies such as Sprint Planning, Daily Standup, and Sprint Retrospective
o Set clear objectives for the project and roles/responsibilities for each team member
o Train and mentor the team on their job responsibilities and Scrum principles
o Hold the team accountable for their tasks and help them achieve their goals
o Identify requirements and develop a skill-development plan for all team members
• Communication
o Be the Single Point of Contact for the client in terms of day-to-day communication
o Periodically communicate project status to all the stakeholders (internal/external)
• Process Management and Improvement
o Create and document processes across all disciplines of software engineering
o Identify gaps and continuously improve processes within the team
o Encourage team members to contribute towards process improvement
o Develop a culture of quality and efficiency within the team
Must have:
• Minimum 8 years of experience (hands-on as well as leadership) in software/data engineering across multiple job functions such as Business Analysis, Development, Solutioning, QA, DevOps, and Project Management
• Hands-on as well as leadership experience in Big Data Engineering projects
• Experience developing or managing cloud solutions using Azure or another cloud provider
• Demonstrable knowledge of Hadoop, Hive, Spark, NoSQL DBs, SQL, Data Warehousing, ETL/ELT, and DevOps tools
• Strong project management and communication skills
• Strong analytical and problem-solving skills
• Strong systems level critical thinking skills
• Strong collaboration and influencing skills
Good to have:
• Knowledge of PySpark, Azure Data Factory, Azure Data Lake Storage, Synapse Dedicated SQL Pool, Databricks, Power BI, Machine Learning, and Cloud Infrastructure (a short PySpark sketch follows this list)
• Background in BFSI with focus on core banking
• Willingness to travel
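As a small illustration of the PySpark item above, here is a sketch of a simple aggregation job. The storage paths and column names are hypothetical.

```python
# Minimal PySpark sketch: read a file and compute a daily aggregate.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-agg").getOrCreate()

# Hypothetical input: a CSV of transactions in object storage.
df = spark.read.csv(
    "s3://example-bucket/transactions.csv", header=True, inferSchema=True
)

daily = (
    df.groupBy("transaction_date")
      .agg(F.sum("amount").alias("total_amount"),
           F.count("*").alias("txn_count"))
)

# Write the result back as Parquet for downstream reporting.
daily.write.mode("overwrite").parquet("s3://example-bucket/daily_totals/")
```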
Work Environment
• Customer Office (Mumbai) / Remote Work
Education
• UG: B.Tech/B.E. in Computer Science, BCA, or B.Sc. Computer Science
Principal Accountabilities:
1. Strong communication skills and the ability to convert business requirements into functional requirements
2. Develop data-driven insights and machine learning models to identify and extract facts from sales, supply chain, and operational data
3. Sound knowledge of and experience with statistical and data mining techniques: regression, random forest, boosting trees, time series forecasting, etc. (a small forecasting sketch follows this list)
4. Experience with state-of-the-art (SOTA) deep learning techniques for solving NLP problems
5. End-to-end data collection, model development and testing, and integration into production environments
6. Build and prototype analysis pipelines iteratively to provide insights at scale
7. Experience querying different data sources
8. Partner with developers and business teams on business-oriented decisions
9. Someone who dares to move forward even when the path is not clear and can be creative in overcoming challenges in the data
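To illustrate the forecasting item above, here is a minimal sketch that frames time series forecasting as supervised learning with lag features and a random forest. The synthetic series is a stand-in for real sales or supply chain data.

```python
# Minimal sketch: forecasting with lag features and a random forest.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 200
# Synthetic weekly-seasonal series standing in for real demand data.
series = pd.Series(50 + 10 * np.sin(np.arange(n) / 7) + rng.normal(0, 2, n))

# Predict today's value from the previous 7 days.
frame = pd.concat({f"lag_{k}": series.shift(k) for k in range(1, 8)}, axis=1)
frame["y"] = series
frame = frame.dropna()

train, test = frame.iloc[:-30], frame.iloc[-30:]
model = RandomForestRegressor(n_estimators=200, random_state=42)
model.fit(train.drop(columns="y"), train["y"])

preds = model.predict(test.drop(columns="y"))
mae = np.mean(np.abs(preds - test["y"].values))
print(f"MAE on the last 30 points: {mae:.2f}")
```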
Responsibilities
- Interpret data, analyze results using statistical techniques and provide ongoing reports
- Develop and implement databases, data collection systems, data analytics, and other strategies that optimize statistical efficiency and quality
- Acquire data from primary or secondary data sources and maintain databases/data systems
- Identify, analyze, and interpret trends or patterns in complex data sets
- Filter and clean data by reviewing computer reports, printouts, and performance indicators to locate and correct code problems (a small pandas cleaning sketch follows this list)
- Work with teams to prioritize business and information needs
- Locate and define new process improvement opportunities
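As a small illustration of the filtering and cleaning responsibility above, here is a minimal pandas sketch; the file and column names are hypothetical.

```python
# Minimal sketch: filter and clean a raw extract with pandas.
import pandas as pd

df = pd.read_csv("raw_orders.csv")  # hypothetical extract

# Drop exact duplicates and rows missing the key identifier.
df = df.drop_duplicates().dropna(subset=["order_id"])

# Normalize types; unparseable values become NaN/NaT for review.
df["order_date"] = pd.to_datetime(df["order_date"], errors="coerce")
df["amount"] = pd.to_numeric(df["amount"], errors="coerce")
df = df[df["amount"] >= 0]  # negative amounts treated as code problems

# Monthly counts and totals for a quick quality check.
summary = df.groupby(df["order_date"].dt.to_period("M"))["amount"].agg(
    ["count", "sum"]
)
print(summary)
```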
Requirements:
- Minimum 1 year of working experience as a Data Analyst or Business Data Analyst
- Technical expertise with data models, database design development, data mining, and segmentation techniques
- Strong knowledge of and experience with reporting packages (Business Objects, etc.), databases (SQL, etc.), and programming (XML, JavaScript, or ETL frameworks)
- Knowledge of statistics and experience using statistical packages for analyzing datasets (Excel, SPSS, SAS, etc.)
- Strong analytical skills with the ability to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy
- Excellent written and verbal communication skills for coordinating across teams.
- A drive to learn and master new technologies and techniques.
Company Name: Intraedge Technologies Ltd ( https://intraedge.com/ )
Type: Permanent, Full time
Location: Any
- A Bachelor’s degree in computer science, computer engineering, another technical discipline, or equivalent work experience
- 4+ years of software development experience
- 4+ years of experience with programming languages and frameworks: Python, Spark, Scala, Hadoop, Hive
- Demonstrated experience with Agile or other rapid application development methods
- Demonstrated experience with object-oriented design and coding.
Please mail your resume to poornimakattherateintraedgedotcom along with your notice period (NP), how soon you can join, expected CTC (ECTC), availability for interview, and location.
We celebrate diversity, embrace a data-driven culture, and deeply encourage professional development through classes, certifications, and conferences. The reciprocity of sharing knowledge and growth with each other, our clients, and partners is a foundation we live by. Employees at Shyftlabs enjoy unlimited paid time off, 11 paid holidays, comprehensive health, vision, and dental benefits, and profit-sharing.
Key Responsibilities
- Design, implement and operate stable, scalable, low-cost solutions to flow data from production systems into the data lake and into end-user-facing applications.
- Design automated processes for in-depth analysis of databases.
- Design automated data control processes.
- Collaborate with the software development team to build and test the designed solutions.
- Learn, publish, analyze and improve management information dashboards, operational business metrics decks, and key performance indicators.
- Improve tools and processes, scale existing solutions, and create new solutions as required based on stakeholder needs.
- Provide in-depth analysis to management with the support of accounting, finance, and transportation teams.
- Perform monthly variance analysis and identify risks & opportunities.
Basic Qualifications
- 3+ years of experience as a Data Engineer or in a similar role
- Experience with data modeling, data warehousing, and building ETL pipelines
- Experience in SQL
Preferred Qualifications
- Degree in Computer Science, Engineering, Mathematics, or a related field and 4+ years of industry experience
- Graduate degree in Computer Science, Engineering or related technical field
- Knowledge of professional software engineering practices & best practices for the full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations
- Proficiency with at least one Object Oriented language (e.g. Java, Python, Ruby)
- Strong customer focus, ownership, urgency, and drive.
- Excellent communication skills and the ability to work well in a team.
- Effective analytical, troubleshooting, and problem-solving skills.
- Experience building data products incrementally and integrating and managing datasets from multiple sources
- Experience with AWS tools and technologies such as Redshift, S3, EC2, and Glue (a minimal warehouse-load sketch follows this list)
- Data modeling expertise and advanced SQL with Oracle, MySQL, and columnar databases
- Experience with Snowflake
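To make the AWS item above concrete, here is a minimal sketch of loading a staged S3 file into Redshift with a COPY statement via psycopg2. The cluster endpoint, credentials, table, bucket, and IAM role are all hypothetical.

```python
# Minimal sketch: load staged S3 data into Redshift with COPY.
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="etl_user",
    password="...",  # fetch from a secrets manager in practice
)

copy_sql = """
    COPY analytics.orders
    FROM 's3://example-bucket/orders/2024-01-01/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
    FORMAT AS PARQUET;
"""

# The connection context manager commits the transaction on success.
with conn, conn.cursor() as cur:
    cur.execute(copy_sql)
```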
A GCP Data Analyst profile must have the following skill sets:
- Knowledge of programming and query languages such as SQL, Oracle, R, MATLAB, Java, and Python
- Data cleansing, data visualization, and data wrangling
- Data modeling and data warehouse concepts
- Ability to adapt to big data platforms like Hadoop and Spark for stream and batch processing
- GCP services (Cloud Dataproc, Cloud Dataflow, Cloud Datalab, Cloud Dataprep, BigQuery, Cloud Datastore, Cloud Data Fusion, AutoML, etc.); a minimal BigQuery sketch follows this list
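As a small illustration of the BigQuery item above, here is a minimal sketch using the official google-cloud-bigquery client; the project, dataset, and table names are hypothetical.

```python
# Minimal sketch: query BigQuery into a pandas DataFrame.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")  # hypothetical project

sql = """
    SELECT status, COUNT(*) AS n
    FROM `example-project.sales.orders`
    GROUP BY status
    ORDER BY n DESC
"""

# Runs the query and materializes the result locally for analysis.
df = client.query(sql).to_dataframe()
print(df.head())
```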
PriceLabs ( chicagobusiness.com/innovators/what-if-you-could-adjust-prices-meet-demand ) is cloud-based software for vacation and short-term rentals that helps them dynamically manage prices just the way large hotels and airlines do! Our mission is to help small businesses in the travel and tourism industry by giving them access to advanced analytical systems that are often restricted to large companies.
We're looking for someone with strong analytical capabilities who wants to understand how our current architecture and algorithms work, and to help us design and develop long-lasting improvements to them. Depending on the needs of the day, the role will come with a good mix of teamwork, following our best practices, introducing us to industry best practices, independent thinking, and ownership of your work.
Responsibilities:
- Design, develop, and enhance our pricing algorithms to enable new capabilities (a toy repricing sketch follows this list).
- Process, analyze, model, and visualize findings from our market level supply and demand data.
- Build and enhance internal and customer facing dashboards to better track metrics and trends that help customers use PriceLabs in a better way.
- Take ownership of product ideas and design discussions.
- Occasional travel to conferences to interact with prospective users and partners, and learn where the industry is headed.
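By way of illustration only, and emphatically not PriceLabs' actual algorithm, here is a toy sketch of occupancy-driven repricing with min/max guardrails; the baseline, sensitivity, and bounds are hypothetical.

```python
# Toy sketch of demand-based repricing: nudge a base rate up or down
# with local occupancy, clamped to owner-set min/max bounds.
def dynamic_price(base_rate: float, occupancy: float,
                  min_rate: float, max_rate: float,
                  sensitivity: float = 0.5) -> float:
    """Scale price by how far occupancy deviates from a 50% baseline."""
    multiplier = 1.0 + sensitivity * (occupancy - 0.5)
    return round(min(max(base_rate * multiplier, min_rate), max_rate), 2)

# Example: a $100/night listing in a market at 80% occupancy.
print(dynamic_price(100.0, 0.80, min_rate=70.0, max_rate=250.0))  # 115.0
```

A real system would replace the single occupancy signal with market-level supply and demand features, seasonality, and lead time, but the clamp-and-scale structure is a common starting point.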
Requirements:
- Bachelor's, Master's, or Ph.D. in Operations Research, Industrial Engineering, Statistics, Computer Science, or other quantitative/engineering fields.
- Strong understanding of analysis of algorithms, data structures and statistics.
- Solid programming experience, including the ability to quickly prototype an idea and test it out.
- Strong communication skills, including the ability and willingness to explain complicated algorithms and concepts in simple terms.
- Experience with relational databases and strong knowledge of SQL.
- Experience building data heavy analytical models in the travel industry.
- Experience in the vacation rental industry.
- Experience developing dynamic pricing models.
- Prior experience working in a fast-paced environment.
- Willingness to wear many hats.