- 3+ years of experience in Machine Learning
- Bachelor's/Master's in Computer Engineering/Science.
- Bachelor's/Master's in Engineering/Mathematics/Statistics with sound knowledge of programming and computer concepts.
- 10th and 12th academics 70% and above.
Skills:
- Strong Python programming skills
- Good conceptual understanding of Machine Learning/Deep Learning/Natural Language Processing
- Strong verbal and written communication skills.
- Should be able to manage a team, meet project deadlines, and interface with clients.
- Should be able to work across different domains, quickly ramp up on business processes and flows, and translate business problems into data solutions.
Data Engineer
Responsibilities:
- Designing and implementing fine-tuned, production-ready data/ML pipelines on the Hadoop platform.
- Driving optimization, testing and tooling to improve quality.
- Reviewing and approving high-level and detailed designs to ensure that the solution delivers on the business needs and aligns with the data & analytics architecture principles and roadmap.
- Understanding business requirements and solution design to develop and implement solutions that adhere to big data architectural guidelines and address business requirements.
- Following proper SDLC (Code review, sprint process).
- Identifying, designing, and implementing internal process improvements: automating manual processes, optimizing data delivery, etc.
- Building robust and scalable data infrastructure (both batch processing and real-time) to support needs from internal and external users.
- Understanding various data security standards and using secure data security tools to apply and adhere to the required data controls for user access in the Hadoop platform.
- Supporting and contributing to development guidelines and standards for data ingestion.
- Working with a data scientist and business analytics team to assist in data ingestion and data related technical issues.
- Designing and documenting the development & deployment flow.
Requirements:
- Experience in developing REST API services using one of the Scala frameworks.
- Ability to troubleshoot and optimize complex queries on the Spark platform
- Expert in building and optimizing ‘big data’ data/ML pipelines, architectures, and data sets.
- Knowledge of modelling unstructured data into structured data designs.
- Experience in Big Data access and storage techniques.
- Experience in cost estimation based on design and development effort.
- Excellent debugging skills for the technical stack mentioned above, including analyzing server and application logs.
- Highly organized, self-motivated, and proactive, with the ability to propose the best design solutions.
- Good time management and multitasking skills to meet deadlines, working both independently and as part of a team.
- A Bachelor’s degree in data science, statistics, computer science, or a similar field
- 2+ years of industry experience working in a data science role, such as statistics, machine learning, deep learning, quantitative financial analysis, data engineering, or natural language processing
- Domain experience in Financial Services (banking, insurance, risk, funds) is preferred
- Experience producing and rapidly delivering minimum viable products; results-focused with the ability to prioritize the most impactful deliverables
- Strong applied statistics capabilities, including an excellent understanding of Machine Learning techniques and algorithms
- Hands-on experience, preferably in implementing scalable Machine Learning solutions using Python / Scala / Java on Azure, AWS, or Google Cloud Platform
- Experience with storage and processing frameworks like Hadoop, Spark, Kafka, etc.
- Experience in building & deploying unsupervised, semi-supervised, and supervised models, with knowledge of various ML algorithms such as regression models, tree-based algorithms, ensemble learning techniques, distance-based ML algorithms, etc.
- Ability to track down complex data quality and data integration issues, evaluate different algorithmic approaches, and analyse data to solve problems
- Experience in implementing parallel processing and in-memory frameworks such as H2O.ai
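As a concrete illustration of the model families named above (regression, tree-based, ensemble, and distance-based), the sketch below fits one of each with scikit-learn. The dataset, library choice, and hyperparameters are illustrative assumptions, not requirements from the posting:

```python
# Minimal sketch: one model from each supervised family named in the posting,
# fitted on synthetic data. All settings here are placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),  # regression model
    "decision_tree": DecisionTreeClassifier(random_state=0),   # tree-based
    "random_forest": RandomForestClassifier(random_state=0),   # ensemble learning
    "knn": KNeighborsClassifier(),                             # distance-based
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: test accuracy = {model.score(X_te, y_te):.2f}")
```

Because all four estimators share the same `fit`/`score` interface, swapping families in and out of a pipeline is mostly a configuration change.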
Job Description - Data Engineer
About us
Propellor is aimed at bringing Marketing Analytics and other Business Workflows to the Cloud ecosystem. We work with international clients to make their analytics ambitions come true by deploying the latest tech stack and data science and engineering methods, making their business data insightful and actionable.
What is the role?
This team is responsible for building a Data Platform for many different units. The platform will be built on the Cloud, so in this role the individual will be organizing and orchestrating different data sources and giving recommendations on the services that fulfil goals based on the type of data.
Qualifications:
• Experience with Python, SQL, Spark
• Working knowledge of JavaScript
• Knowledge of data processing, data modeling, and algorithms
• Strong in data, software, and system design patterns and architecture
• API building and maintaining
• Strong soft skills, communication
Nice to have:
• Experience with cloud: Google Cloud Platform, AWS, Azure
• Knowledge of Google Analytics 360 and/or GA4.
Key Responsibilities
• Design and develop platform based on microservices architecture.
• Work on the core backend and ensure it meets the performance benchmarks.
• Work on the front end with ReactJS.
• Designing and developing APIs for the front end to consume.
• Constantly improve the architecture of the application by clearing the technical backlog.
• Meeting both technical and consumer needs.
• Staying abreast of developments in web applications and programming languages.
What are we looking for?
An enthusiastic individual with the following skills. Please do not hesitate to apply if you do not match all of it. We are open to promising candidates who are passionate about their work and are team players.
• Education - BE/MCA or equivalent.
• Agnostic/Polyglot with multiple tech stacks.
• Worked on open-source technologies – NodeJS, ReactJS, MySQL, and NoSQL stores such as MongoDB and DynamoDB.
• Good experience with Front-end technologies like ReactJS.
• Backend exposure – good knowledge of building API.
• Worked on serverless technologies.
• Efficient in building microservices that combine server and front-end work.
• Knowledge of cloud architecture.
• Should have sound working experience with relational and columnar databases.
• Should be innovative and communicative in approach.
• Will be responsible for the functional/technical track of a project.
Whom will you work with?
You will closely work with the engineering team and support the Product Team.
Hiring Process includes:
a. Written Test on Python and SQL
b. 2 - 3 rounds of Interviews
Immediate Joiners will be preferred
- Design thinking to really understand the business problem
- Understanding new ways to deliver (agile, DT)
- Being able to do a functional design across S/4HANA and SCP. An understanding of the possibilities around automation/RPA (which should include UIPath, Blueprism, Contextor) and how these can be identified and embedded in business processes
- Following on from this, the same is true for AI and ML: what is available in SAP standard, how these can be enhanced/developed further, and how these technologies can be embedded in the business process. There is no point in understanding only the standard process, or only the AI and ML components; we will need a new type of hybrid SAP practitioner.
Basic Qualifications:
- 5+ years of experience working in a Big Data Software Development role
- Experience managing and deploying ML models in real world environments
- Bachelor's degree in Computer Science, Mathematics, Statistics, or other analytical fields
- Experience working with Python, Scala, Spark or other open-source software with data science libraries
- Experience in advanced math and statistics
- Excellent familiarity with the command-line Linux environment
- Able to understand various data structures and common methods in data transformation
- Experience deploying machine learning models
Data Analyst
at Wheelseye Technology India Pvt Ltd.
About WheelsEye :
Logistics in India is a complex business - layered with multiple stakeholders, unorganized, primarily offline, and with many trivial yet deep-rooted problems. Though this industry contributes 14% to the GDP, its problems have gone unattended and ignored, until now.
WheelsEye is a logistics company, building a digital infrastructure around fleet owners. Currently, we offer solutions to empower truck fleet owners. Our proprietary software & hardware solutions help automate operations, secure fleet, save costs, improve on-time performance, and streamline their business.
Why WheelsEye?
- Work on a real Indian problem of scale; impact the lives of 5.5 crore fleet owners, drivers, and their families in a meaningful way
- Different from current market players: heavily focused on and built around truck owners
- Problem-solving and learning-oriented organization
- Audacious goals, high speed, and action orientation
- Opportunity to scale the organization across the country
- Opportunity to build and execute the culture
- Contribute to and become a part of the action plan for building the tech, finance, and service infrastructure for the logistics industry

It's Tough!
Requirements:
- Bachelor’s degree with 2-5 years of experience in the analytics domain
- Experience articulating and translating business questions and using statistical techniques to arrive at an answer using available data
- Proficient with a scripting and/or programming language, e.g. Python, R (optional), advanced SQL; advanced knowledge of data processing, database programming, and data analytics tools and techniques
- Extensive background in data mining, modelling, and statistical analysis; able to understand various data structures and common methods in data transformation, e.g. linear and logistic regression, clustering, decision trees, etc.
- Working knowledge of tools like Mixpanel, Metabase, Google Sheets, Google BigQuery & Data Studio is preferred
- Ability to self-start and work self-directed in a fast-paced environment
If you are willing to work on solving real-world problems for truck owners, join us!
- Build a team with skills in ETL, reporting, MDM, and ad-hoc analytics support
- Build technical solutions using the latest open-source and cloud-based technologies
- Work closely with the offshore senior consultant, the onshore team, and the client's business and IT teams to gather project requirements
- Assist overall project execution from India, starting from project planning, team formation, system design and development, testing, UAT, and deployment
- Build demos and POCs in support of business development for new and existing clients
- Prepare project documents and PowerPoint presentations for client communication
- Conduct training sessions to train associates and help shape their growth
Senior Data Scientist
at SigTuple