Required Skills:
- Proficient in Scala (Akka HTTP, actors, streams) or Python
- Proficient with Spark and writing MapReduce applications
- Experience building and optimizing ETL processes, data pipelines, architectures
- Experience with implementation of any data warehouse (Snowflake, BigQuery, Redshift)
- Preferred: Experience with any Graph DB
- NoSQL databases like MongoDB or Elasticsearch
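The Spark/MapReduce requirement above boils down to the map-then-reduce pattern. As a rough, Spark-free sketch of that pattern (partition contents are hypothetical, standing in for HDFS splits), the classic word count looks like:

```python
from collections import Counter
from functools import reduce

# Hypothetical input partitions standing in for HDFS splits.
partitions = [
    "spark makes mapreduce style jobs easy",
    "mapreduce jobs map then reduce",
]

# Map phase: each partition emits per-word counts.
mapped = [Counter(text.split()) for text in partitions]

# Reduce phase: merge the per-partition counts by key.
totals = reduce(lambda a, b: a + b, mapped, Counter())

print(totals["mapreduce"])  # → 2, one mention per partition
```

In real Spark the map and reduce phases run distributed across executors, but the shape of the computation is the same.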
About Algoscale Technologies
Algoscale is a boutique Big Data Analytics and Data Science firm incorporated in the US with its development center in Noida, India. Applying analytical tools, techniques, and technology, we help organizations gain valuable insights that accelerate business decision-making and increase profitability. We deliver value by combining data, analytics, and AI to help businesses create a competitive advantage. Our talent pool of data scientists, engineers, and business analysts comes from strong educational and professional backgrounds and has an in-depth understanding of analytics backed by rich domain experience. From building technology infrastructure to support zillions of data points, to finding patterns among disparate data sources, to deploying analytics platforms, we provide solutions across the data lifecycle. At Algoscale, we love data. To know more, visit www.algoscale.com
- 3+ years experience in practical implementation and deployment of ML based systems preferred.
- BE/B Tech or M Tech (preferred) in CS/Engineering with strong mathematical/statistical background
- Strong mathematical and analytical skills, especially statistical and ML techniques, with familiarity with different supervised and unsupervised learning algorithms
- Implementation experience and deep knowledge of Classification, Time Series Analysis, Pattern Recognition, Reinforcement Learning, Deep Learning, Dynamic Programming and Optimisation
- Experience in working on modeling graph structures related to spatiotemporal systems
- Programming skills in Python
- Experience in developing and deploying on cloud (AWS or Google or Azure)
- Good verbal and written communication skills
- Familiarity with well-known Python libraries and ML frameworks such as Pandas, Keras, and TensorFlow
Job brief
We are looking for a Lead Data Scientist to lead a technical team and help us extract useful insights from raw data.
Lead Data Scientist responsibilities include managing the data science team, planning
projects and building analytics models. You should have a strong problem-solving ability
and a knack for statistical analysis. If you’re also able to align our data products with our
business goals, we’d like to meet you.
Your ultimate goal will be to help improve our products and business decisions by
making the most out of our data.
Responsibilities
● Conceive, plan and prioritize data projects
● Ensure data quality and integrity
● Interpret and analyze data problems
● Build analytic systems and predictive models
● Align data projects with organizational goals
● Lead data mining and collection procedures
● Test performance of data-driven products
● Visualize data and create reports
● Build and manage a team of data scientists and data engineers
Requirements
● Proven experience as a Data Scientist or similar role
● Solid understanding of machine learning
● Knowledge of data management and visualization techniques
● A knack for statistical analysis and predictive modeling
● Good knowledge of R, Python and MATLAB
● Experience with SQL and NoSQL databases
● Strong organizational and leadership skills
● Excellent communication skills
● A business mindset
● Degree in Computer Science, Data Science, Mathematics or similar field
● Familiarity with emerging open-source data science/machine learning libraries and big data platforms
We are looking for a Big Data Engineer with Java for the Chennai location
Location: Chennai
Exp: 11 to 15 Years
Job description
Required Skill:
1. Candidate should have a minimum of 7 years of total experience
2. Candidate should have minimum 4 years of experience in Big Data design and development
3. Candidate should have experience in Java, Spark, Hive & Hadoop, Python
4. Candidate should have experience in any RDBMS.
Roles & Responsibility:
1. To create work plans, monitor and track the work schedule for on time delivery as per the defined quality standards.
2. To develop and guide the team members in enhancing their technical capabilities and increasing productivity.
3. To ensure process improvement and compliance in the assigned module, and participate in technical discussions or review.
4. To prepare and submit status reports to minimize exposure and risks on the project and to ensure closure of escalations
Regards,
Priyanka S
7899408877
Carsome’s Data Department is on the lookout for a Data Scientist/Data Science Lead with a strong passion for building data-powered products.
The Data Science function under the Data Department is responsible for standardising methods, including code libraries and documentation; mentoring the team of data science resources/interns; quality assurance of outputs; modeling techniques and statistics; and leveraging a variety of technologies, open-source languages, and cloud computing platforms.
You will get to lead and implement projects such as price optimization/prediction, enabling iconic personalization experiences for our customers, inventory optimization, etc.
Job Descriptions
- Identify and integrate datasets that can be leveraged through our product, and work closely with the data engineering team to develop data products.
- Execute analytical experiments methodically to help solve various problems and make a true impact across functions such as operations, finance, logistics, marketing.
- Identify, prioritize, and design testing opportunities that will inform algorithm enhancements.
- Devise and utilize algorithms and models to mine big data stores; perform data and error analysis to improve models; and clean and validate data for uniformity and accuracy.
- Unlock insights by analyzing large amounts of complex website traffic and transactional data.
- Implement analytical models into production by collaborating with data analytics engineers.
Technical Requirements
- Expertise in model design, training, evaluation, and implementation
- ML algorithm expertise: k-nearest neighbors, random forests, Naive Bayes, regression models, gradient boosting, t-SNE
- Deep learning expertise with PyTorch, TensorFlow, Keras
- Python, PySpark, SQL, R, AWS SageMaker/Personalize, etc.
- Machine Learning / Data Science Certification
Experience & Education
- Bachelor’s in Engineering / Master’s in Data Science / Postgraduate Certificate in Data Science.
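Of the algorithms named in the technical requirements, k-nearest neighbors is the simplest to sketch. A toy stdlib-only version is below; the points and labels are hypothetical, standing in for the scikit-learn/PyTorch tooling actually expected on the job:

```python
from collections import Counter
from math import dist

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among the k nearest training points."""
    # Sort training points by Euclidean distance to the query, keep the k closest.
    nearest = sorted(train, key=lambda item: dist(item[0], query))[:k]
    # Majority vote over the labels of those neighbors.
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical 2-feature training data: two well-separated vehicle classes.
train = [((1.0, 1.0), "sedan"), ((1.2, 0.9), "sedan"),
         ((5.0, 5.1), "suv"), ((4.8, 5.3), "suv"), ((5.2, 4.9), "suv")]

print(knn_predict(train, (1.1, 1.0)))  # → sedan
```

Production use would of course involve feature scaling, train/test splits, and a vectorized implementation, but this captures the algorithm's core logic.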
This profile will include the following responsibilities:
- Develop Parsers for XML and JSON Data sources/feeds
- Write Automation Scripts for product development
- Build API Integrations for 3rd Party product integration
- Perform Data Analysis
- Research on Machine learning algorithms
- Understand AWS cloud architecture and work with 3rd-party vendors for deployments
- Resolve issues in the AWS environment
We are looking for candidates with:
Qualification: BE/BTech/Bsc-IT/MCA
Programming Language: Python
Web Development: Basic understanding of Web Development. Working knowledge of Python Flask is desirable
Database & Platform: AWS/Docker/MySQL/MongoDB
Basic Understanding of Machine Learning Models & AWS Fundamentals is recommended.
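The XML/JSON parser responsibility listed above can be sketched with the Python standard library alone. The feed contents and field names here are hypothetical, standing in for real 3rd-party sources:

```python
import json
import xml.etree.ElementTree as ET

# Hypothetical feed snippets standing in for real 3rd-party data sources.
xml_feed = "<items><item id='1'>widget</item><item id='2'>gadget</item></items>"
json_feed = '{"items": [{"id": 3, "name": "sprocket"}]}'

def parse_xml(feed):
    """Extract (id, name) pairs from an XML feed."""
    root = ET.fromstring(feed)
    return [(item.get("id"), item.text) for item in root.iter("item")]

def parse_json(feed):
    """Extract (id, name) pairs from a JSON feed, normalizing ids to strings."""
    return [(str(rec["id"]), rec["name"]) for rec in json.loads(feed)["items"]]

print(parse_xml(xml_feed) + parse_json(json_feed))
# → [('1', 'widget'), ('2', 'gadget'), ('3', 'sprocket')]
```

Normalizing both feeds to a common record shape is the point: downstream analysis code then never needs to know which source a record came from.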
What you will do:
- Assembling large, complex data sets
- Assisting Data Science initiatives with data set preparation, annotation etc.
- Enabling Data Science teams to design solutions that are highly scalable and available
- Managing code pipelines for ML and Reporting
What you need to have:
- B.Tech/ B.E (Masters would be an advantage)
- Expert level experience in Python Development
- Knowledge of databases and data structures (MySQL and PostgreSQL)
- Good hands-on experience of SQL
- Experience working on real time production systems (either of Data Science/ Application development is fine)
- Experience in backend development, including Python APIs, Flask, integrations, etc.
- Exposure to MLOps (operationalizing statistical/ML algorithms)
- Experience working on Data Science engagements/ Use cases
- Basic knowledge of Statistics
- Exposure to any of these- NLP/ Statistical Modelling/ Image Analytics
- Good aptitude to understand business use cases/ requirements
- Previous experience in technical solutioning
- Excellent problem solving skills
- Good Communication skills
What You’ll Do
- Synthesize data to provide actionable insights to the leadership and the product team to influence strategy and growth.
- Build dashboards and tools, craft analysis, and tell stories with data to help our teams make better decisions.
- Track user performance data and map it to key learning metrics for deeper insights
- Monitor the performance of new features and recommend iterations based on your findings
- Partner with other stakeholders to scope features that solve user problems
- Ownership of full analytics for the Business Unit/ Performance metric: Planning, structuring, identifying key reports to be made, and automation of the reports
- Monitor and measure launched initiatives [A/B testing] and feed learnings back into the development process
- Prepare reports for the management stating trends, patterns, and predictions using relevant data
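The A/B-testing bullet above typically means comparing conversion rates between a control and a variant. A minimal two-proportion z-test sketch, stdlib only and with hypothetical counts:

```python
from math import sqrt, erfc

def ab_test_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test for control (a) vs. variant (b)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference).
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Convert |z| to a two-sided p-value via the complementary error function.
    return erfc(abs(z) / sqrt(2))

# Hypothetical launch: 520/5000 control vs. 600/5000 variant conversions.
p = ab_test_p_value(520, 5000, 600, 5000)
print(p < 0.05)  # → True: the lift is statistically significant at 5%
```

Real experimentation platforms add sample-size planning and multiple-comparison corrections on top, but this is the basic significance check behind "measure launched initiatives and feed learnings back."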
Required skills
- Bachelor’s degree in mathematics, finance, statistics, economics, computer science, or information technology. Knowledge of EdTech and learning science is an added advantage
- Strong statistical skills to help collect, measure, organize and analyze data
- 1-4 years of work experience in an EdTech product startup or analytics firm
- Experience in Data Visualisation, Excel, CleverTap & strong MongoDB database management skills
- Ability to learn new tech tools and software quickly
- Passion to work with data for a larger social impact
- Familiarity with coding (specifically Node environment) is a plus
- Ability to transform numbers into stories
- Building and operationalizing large-scale enterprise data solutions and applications using one or more Azure data and analytics services in combination with custom solutions: Azure Synapse/Azure SQL DWH, Azure Data Lake, Azure Blob Storage, Spark, HDInsight, Databricks, CosmosDB, EventHub/IoTHub.
- Experience in migrating on-premise data warehouses to data platforms on the Azure cloud.
- Designing and implementing data engineering, ingestion, and transformation functions
- Azure Synapse or Azure SQL Data Warehouse
- Spark on Azure (available in HDInsight and Databricks)