- Manages the delivery of large, complex data science projects using appropriate frameworks, collaborating with stakeholders to manage scope and risk. Helps the AI/ML Solution Analyst build solutions to customer needs on our platform, Newgen AI Cloud.
- Drives profitability and continued success by managing service quality and cost and by leading delivery. Proactively supports sales through innovative solutions and delivery excellence.
Work location: Gurugram
Key Responsibilities:
1. Collaborate on and contribute to all project phases, applying technical know-how to design, develop, and deploy solutions at the customer end.
2. Run end-to-end implementations: gathering requirements, analysing, designing, coding, and deploying to production.
3. Work in a client-facing role, talking to clients regularly to clarify requirements.
4. Lead the team.
Core Tech Skills: Azure, cloud computing, Java/Scala, Python, design patterns, and fair knowledge of data science and of data lakes/DWH.
Educational Qualification: Engineering graduate, preferably in Computer Science.
● Set department objectives.
● Hire, promote, motivate, train, mentor, and incentivize the team.
● Innovate, experiment, and implement new technologies.
● Contribute to the next level of growth for the AI practice.
RESPONSIBILITIES
● Lead and manage the AI team within the global AI practice.
● Work closely with data scientists and AI engineers to create and deploy models catering to customer requirements.
● Establish scalable, efficient, automated processes for data analysis, model development, validation, deployment, serving, and monitoring.
● Work closely with the data engineering practice to build and deploy end-to-end AI pipelines, including data processing, model training, and model deployment.
● Build and deploy large-scale, enterprise-ready AI solutions.
● Own and deliver end-to-end large, complex projects within the AI practice.
● Support the sales and BD process and present to CXO-level client representatives.
● Work with clients to identify new AI opportunities.
● Work with the Sales, Solutioning, and Engineering teams to develop and propose cutting-edge AI solutions.
● Contribute to building AI proposals, attend orals, and provide easy-to-understand communications on AI.
● Manage client relationships.
● Cooperate with and contribute to global AI programs.
● Review proposed designs and make recommendations for improvement.
● Contribute to and promote good software engineering practices across the team.
● Share knowledge with the team to help it adopt best practices.
● Actively contribute to and re-use community best practices.
About Our Company:
● We built an end-to-end AI framework to help our clients accelerate their journey to launching models.
● We work closely with academic experts and research groups to solve some of the most niche problems in medical imaging, biopharma, life sciences, law, retail, and agriculture.
● Work environment: we create an impact on the client's business and transform innovative ideas into reality. Even our junior engineers get the opportunity to work on different product features in complex domains.
● Open communication, flat hierarchy, plenty of individual responsibility.
We are looking for a Python developer with a passion for driving more solar and clean energy in the world by working with us. Our software helps anyone understand how much solar could be put up on a rooftop, and calculates how many units of clean energy the solar PV system would generate along with how much the homeowner would save. This is a crucial step in educating people who want to go solar but aren't completely convinced of its value proposition. If you are interested in bringing the latest technologies to the fast-growing solar industry and want to help society transition to a more sustainable future, we would love to hear from you!
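The arithmetic behind that pitch is simple enough to sketch. Below is a minimal, hypothetical Python version; the panel rating, area per panel, specific yield, and tariff are illustrative assumptions, not values from the actual product.

```python
# A minimal sketch of the generation-and-savings estimate described above.
# Every figure here (panel rating, area per panel, yield, tariff) is an
# illustrative assumption, not a value from the actual product.

PANEL_WATTS = 540          # assumed rating of one panel, in watts
AREA_PER_PANEL_M2 = 2.6    # assumed rooftop area needed per panel
SPECIFIC_YIELD = 4.0       # assumed kWh generated per kW of capacity per day

def estimate_system_kw(usable_roof_area_m2: float) -> float:
    """How much solar could be put up on the roof, in kW."""
    panels = int(usable_roof_area_m2 // AREA_PER_PANEL_M2)
    return panels * PANEL_WATTS / 1000.0

def estimate_annual_units(system_kw: float) -> float:
    """Units (kWh) of clean energy the PV system would generate per year."""
    return system_kw * SPECIFIC_YIELD * 365

def estimate_annual_savings(annual_units: float, tariff_per_unit: float) -> float:
    """Homeowner savings, assuming every unit generated offsets a grid purchase."""
    return annual_units * tariff_per_unit

kw = estimate_system_kw(usable_roof_area_m2=40.0)
units = estimate_annual_units(kw)
savings = estimate_annual_savings(units, tariff_per_unit=8.0)
print(f"{kw:.1f} kW system, ~{units:,.0f} kWh/yr, ~{savings:,.0f}/yr saved")
```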
You will -
- Be an early employee at a growing startup and help shape the team culture
- Safeguard code quality on your team, reviewing others’ code with an eye to performance and maintainability
- Be trusted to take point on complex product initiatives
- Work in an ownership-driven, micromanagement-free environment
You should have:
- Strong programming fundamentals. (If you don’t officially have a CS degree but know programming, it’s fine with us!)
- A strong problem-solving attitude.
- Experience with solar or electrical modelling is a plus, although not required.
• Problem Solving: Resolves production issues (P1–P4 service issues), problems relating to introducing new technology, and major issues in the platform and/or service.
• Software Development Concepts: Understands and has experience with a wide range of programming concepts, and is aware of and has applied a range of algorithms.
• Commercial & Risk Awareness: Able to understand and evaluate both obvious and subtle commercial risks, especially in relation to a programme.
Experience you would be expected to have
• Cloud: experience with one of the following cloud vendors: AWS, Azure, or GCP.
• GCP: experience preferred, but willingness to learn essential.
• Big Data: experience with Big Data methodology and technologies.
• Programming: Python or Java, with experience working with data (ETL).
• DevOps: understands how to work in a DevOps and agile way (versioning, automation, defect management) – mandatory.
• Agile methodology: knowledge of Jira.
Experience in developing Lambda functions with AWS Lambda
Expertise with Spark/PySpark – the candidate should be hands-on with PySpark code and able to write Spark transformations (a sketch follows this list)
Should be able to code in Python and Scala.
Snowflake experience is a plus
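As a concrete illustration of the kind of Spark transformation work described above, here is a minimal PySpark sketch; the schema, column names, and S3 paths are hypothetical.

```python
# A minimal PySpark transformation sketch. The event schema
# (user_id, amount, event_ts) and the S3 paths are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Read raw events written upstream as Parquet
events = spark.read.parquet("s3://example-bucket/raw/events/")

# Typical transformations: filter, derive a column, aggregate
daily_spend = (
    events
    .filter(F.col("amount") > 0)
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("user_id", "event_date")
    .agg(F.sum("amount").alias("total_spend"),
         F.count("*").alias("txn_count"))
)

# Write partitioned output for downstream consumers
daily_spend.write.mode("overwrite").partitionBy("event_date") \
    .parquet("s3://example-bucket/curated/daily_spend/")
```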
- Experience with cloud-native data tools/services such as AWS Athena, AWS Glue, Redshift Spectrum, AWS EMR, AWS Aurora, BigQuery, Bigtable, S3, etc.
- Strong programming skills in at least one of the following languages: Java, Scala, C++.
- Familiarity with a scripting language like Python as well as Unix/Linux shells.
- Comfortable with multiple AWS components including RDS, AWS Lambda, AWS Glue, AWS Athena, EMR. Equivalent tools in the GCP stack will also suffice.
- Strong analytical skills and advanced SQL knowledge, including indexing and query-optimization techniques.
- Experience implementing software around data processing, metadata management, and ETL pipeline tools like Airflow.
Experience with the following software/tools is highly desired:
- Apache Spark, Kafka, Hive, etc.
- SQL and NoSQL databases like MySQL, Postgres, DynamoDB.
- Workflow management tools like Airflow (a minimal DAG sketch follows this list).
- AWS cloud services: RDS, AWS Lambda, AWS Glue, AWS Athena, EMR.
- Familiarity with Spark programming paradigms (batch and stream-processing).
- RESTful API services.
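Since Airflow appears in both lists above, a minimal DAG sketch may help; it assumes Airflow 2.4+ (for the `schedule` argument), and the task names and placeholder callables are illustrative only.

```python
# A minimal Airflow DAG sketch, assuming Airflow 2.4+.
# Task names, schedule, and the callables are illustrative assumptions.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data from source")       # placeholder for real extract logic

def transform():
    print("clean and reshape the data")      # placeholder for real transform logic

with DAG(
    dag_id="example_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task           # run extract before transform
```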
- Modeling complex problems, discovering insights, and identifying opportunities through the use of statistical, algorithmic, mining, and visualization techniques
- Experience working with the business to understand requirements, create the problem statement, and build scalable, dependable analytical solutions
- Strong hands-on experience in Python
- Broad knowledge of fundamentals and state-of-the-art in NLP and machine learning
- Strong analytical & algorithm development skills
- Deep knowledge of techniques such as linear regression, gradient descent, logistic regression, forecasting, cluster analysis, decision trees, linear optimization, text mining, etc. (a gradient-descent sketch follows this list)
- Ability to collaborate across teams and strong interpersonal skills
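As a small illustration of one listed technique, here is a sketch of linear regression fitted by batch gradient descent in NumPy; the synthetic data, learning rate, and iteration count are assumptions for the example.

```python
# Linear regression via batch gradient descent, as a minimal NumPy sketch.
# Synthetic data, learning rate, and iteration count are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                       # 200 samples, 3 features
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 3.0 + rng.normal(scale=0.1, size=200)  # intercept of 3.0

Xb = np.hstack([np.ones((200, 1)), X])              # prepend a bias column
w = np.zeros(4)
lr = 0.1

for _ in range(500):
    grad = 2 / len(y) * Xb.T @ (Xb @ w - y)         # gradient of mean squared error
    w -= lr * grad                                  # step down the gradient

print("fitted [bias, w1, w2, w3]:", np.round(w, 2))  # expect ~[3.0, 2.0, -1.0, 0.5]
```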
Skills
- Sound theoretical knowledge of ML algorithms and their applications
- Hands-on experience in statistical modeling tools such as R, Python, and SQL
- Hands-on experience in Machine learning/data science
- Strong knowledge of statistics
- Experience in advanced analytics/statistical techniques – regression, decision trees, ensemble machine-learning algorithms, etc.
- Experience in Natural Language Processing & Deep Learning techniques
- pandas, NLTK, scikit-learn, spaCy, TensorFlow (a small text-classification sketch follows this list)
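As a small illustration of how these libraries fit together, here is a sketch of a text classifier using pandas and scikit-learn; the toy dataset is an assumption.

```python
# A minimal text-classification sketch using pandas + scikit-learn.
# The toy dataset below is an illustrative assumption.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

df = pd.DataFrame({
    "text": ["great product, works well", "terrible, broke in a day",
             "love it", "waste of money", "excellent quality", "very poor"],
    "label": [1, 0, 1, 0, 1, 0],       # 1 = positive, 0 = negative
})

# TF-IDF features feeding a logistic-regression classifier
model = Pipeline([
    ("tfidf", TfidfVectorizer()),
    ("clf", LogisticRegression()),
])
model.fit(df["text"], df["label"])

print(model.predict(["works great", "broke immediately"]))  # expect [1, 0]
```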
• Help build a Data Science team that will research, design, implement, and deploy full-stack, scalable data analytics and machine learning solutions to address various business issues.
• Model complex algorithms, discover insights, and identify business opportunities through the use of algorithmic, statistical, visualization, and mining techniques.
• Translate business requirements into quick prototypes and enable the development of big data capabilities that drive business outcomes.
• Take responsibility for data governance and for defining data collection and collation guidelines.
• Advise, guide, and train junior data engineers in their jobs.
Must Have:
• 4+ years’ experience in a leadership role as a Data Scientist
• Preferably from the retail, manufacturing, or healthcare industry (not mandatory)
• Willing to start from scratch and build up a team of Data Scientists
• Open to taking up challenges with end-to-end ownership
• Confident, with excellent communication skills, and a good decision-maker
- We are looking for a Data Engineer to build the next-generation mobile applications for our world-class fintech product.
- The candidate will be responsible for expanding and optimising our data and data pipeline architecture, as well as optimising data flow and collection for cross-functional teams.
- The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimising data systems and building them from the ground up.
- Looking for a person with a strong ability to analyse and provide valuable insights to the product and business team to solve daily business problems.
- You should be able to work in a high-volume environment and have outstanding planning and organisational skills.
Qualifications for Data Engineer
- Working SQL knowledge: experience with relational databases and query authoring, as well as working familiarity with a variety of databases.
- Experience building and optimising ‘big data’ data pipelines, architectures, and data sets.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Strong analytic skills related to working with unstructured datasets. Build processes supporting data transformation, data structures, metadata, dependency and workload management.
- Experience supporting and working with cross-functional teams in a dynamic environment.
- Looking for a candidate with 2-3 years of experience in a Data Engineer role who is a CS graduate or has equivalent experience.
What we're looking for?
- Experience with big data tools: Hadoop, Spark, Kafka, and alternatives.
- Experience with relational SQL and NoSQL databases, including MySQL/Postgres and MongoDB.
- Experience with data pipeline and workflow management tools: Luigi, Airflow.
- Experience with AWS cloud services: EC2, EMR, RDS, Redshift.
- Experience with stream-processing systems: Storm, Spark Streaming (a streaming sketch follows this list).
- Experience with object-oriented/functional scripting languages: Python, Java, Scala.
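As a small illustration of the stream-processing paradigm listed above, here is a minimal Spark Structured Streaming sketch reading from Kafka; the topic and server names are hypothetical, and it assumes the spark-sql-kafka connector package is available.

```python
# A minimal Spark Structured Streaming sketch reading from Kafka.
# Topic/server names are hypothetical; assumes the spark-sql-kafka
# connector package is on the classpath.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("stream-sketch").getOrCreate()

# Read a Kafka topic as an unbounded streaming DataFrame
events = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")
          .option("subscribe", "events")
          .load())

# Count messages per 1-minute window using the Kafka record timestamp
counts = (events
          .groupBy(F.window(F.col("timestamp"), "1 minute"))
          .count())

# Emit running counts to the console; a real job would write to a durable sink
query = counts.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()
```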