We are looking for a technically driven "Full-Stack Engineer" for one of our premium clients.
Qualifications
• Bachelor's degree in computer science or related field; Master's degree is a plus
• 3+ years of relevant work experience
• Meaningful experience with at least two of the following technologies: Python, Scala, Java
• Strong, proven experience with distributed processing frameworks (Spark, Hadoop, EMR) and SQL is expected
• Commercial client-facing project experience is helpful, including working in close-knit teams
• Ability to work across structured, semi-structured, and unstructured data, extracting information and
identifying linkages across disparate data sets
• Proven ability to communicate complex solutions clearly
• Understanding of Information Security principles to ensure compliant handling and management of client data
• Experience with and interest in cloud platforms such as AWS, Azure, Google Cloud Platform, or Databricks
• Extraordinary attention to detail
Job Responsibilities:
- Developing highly reliable web crawlers and parsers across various websites
- Extract structured/unstructured data and store it in SQL/NoSQL databases
- Work closely with Product/Research/Technology teams to provide data for analysis
- Develop frameworks for automating and maintaining constant flow of data from multiple sources
- Develop and maintain data pipelines for batch/incremental as well as real-time requirements.
- Develop a deep understanding of data sources on the web and know exactly how, when, and which data to parse and store
- Create a monitoring framework to identify anomalies in web crawlers and resolve contingencies
- Implement best practices in-house to detect / prevent crawlers on internal systems and websites
- Writing and running queries on large datasets to support analytics team or data sharing requirements.
- Dealing well with ambiguity, prioritizing needs, and delivering results in a dynamic environment
Must-Have:
- Proficiency in Python and excellent knowledge of web crawling in Python with Scrapy / BeautifulSoup / urllib / Selenium / WebHarvest, etc.
- Strong experience in data parsing and understanding of document structure in HTML – CSS/DOM/XPath. Knowledge of JS would be a plus
- Experience in working with large datasets, querying terabytes of data on a regular basis – proficient in SQL
- Must be able to develop reusable code-based crawlers that are easy to modify / transform
- Proficient in Git, with a good understanding of launching instances and setting up crawlers on AWS/Azure
- Understands detailed requirements and demonstrates excellent problem-solving skills
- Strong sense of ownership, drive, and ability to deliver results.
- A track record of digging into tough problems and challenges and bringing innovative approaches to solve them. Must be highly capable of self-teaching new techniques.
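As a flavour of the parsing side of this work, here is a minimal, stdlib-only sketch (the page snippet and link targets are hypothetical) that extracts hyperlinks and their anchor text from HTML, the kind of DOM traversal the crawling libraries above automate at scale:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects (href, anchor text) pairs from an HTML document."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._current_href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._current_href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        # Only accumulate text while inside an <a> element.
        if self._current_href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._current_href is not None:
            self.links.append((self._current_href, "".join(self._text).strip()))
            self._current_href = None

# Hypothetical page fragment for illustration:
html = '<p>See <a href="/jobs">open roles</a> and <a href="/about">about us</a>.</p>'
parser = LinkExtractor()
parser.feed(html)
print(parser.links)  # [('/jobs', 'open roles'), ('/about', 'about us')]
```

In practice Scrapy or BeautifulSoup would replace the hand-rolled parser, adding robustness to malformed markup, CSS/XPath selectors, and request scheduling.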
B.E/B.Tech in Computer Science / IT, BCA, B.Sc in Computer Science / IT
About the role
We are looking for a passionate Data Analyst in the Finance Reporting function who will work closely with the business/product teams to enable data-driven decision making. The successful candidate will turn data into information, information into insights, and insights into business decisions.
Job responsibilities
- You will work closely with internal Teams to identify, define, collect, and track key business metrics.
- You will pull data required to conduct business analysis, build reports, dashboards and metrics to monitor the performance.
- You will execute quantitative analysis that translates data into actionable insights
- You will work with and influence stakeholders to make data-driven decisions
- Use analytical techniques to solve problems
A successful candidate for this role must have:
- Bachelor's degree from a reputed institute, or an MBA
- 2+ Years of experience in a relevant role
- Experience translating unstructured problems into an analytical framework
- Ability to experiment with alternate analytical techniques to solve a problem
- Exceptional written and verbal communication skills
- Knowledge of Finance or Business Operations is a plus
- Good command of SQL and basic programming knowledge
- MS Office including a strong grasp of Excel (Charting, Formulae, Pivots, Macros)
- Experience with BI tools (Power BI, Tableau, etc.) is preferable
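To sketch the kind of SQL work this role involves, here is a minimal, self-contained example (hypothetical orders table, in-memory SQLite) of turning raw transaction rows into a monthly revenue metric of the sort that would feed a report or dashboard:

```python
import sqlite3

# Hypothetical orders table, built in memory for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_date TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("2024-01-05", 100.0), ("2024-01-20", 50.0), ("2024-02-03", 75.0)],
)

# Aggregate revenue per month: substr(date, 1, 7) yields 'YYYY-MM'.
rows = conn.execute(
    "SELECT substr(order_date, 1, 7) AS month, SUM(amount) AS revenue "
    "FROM orders GROUP BY month ORDER BY month"
).fetchall()
print(rows)  # [('2024-01', 150.0), ('2024-02', 75.0)]
```

The same GROUP BY pattern carries over directly to warehouse SQL dialects and to pivot tables in Excel or a BI tool.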
About ekincare
Join the fastest-growing health benefits platform in India. ekincare is a Series A funded startup operating in one of the few industries with tailwinds from COVID-19. We are at the intersection of health and insure-tech, targeting South East Asia’s multi-billion-dollar corporate health benefits market. Trusted by Fortune 500 companies, ekincare’s patent-pending preventive, predictive, and personalized platform helps employers administer their health benefits efficiently, reduce health care costs by 20%, and increase employee engagement.
Recognized for our innovations by NASSCOM (10 most innovative start-ups), Aegis Graham Bell Awards (Most innovative enterprise app), and named as the best "Healthy Workplace Brand" in the IHW Summit.
We are proud of creating a state-of-the-art digitization AI that unlocks a whole new world of healthcare data and forms the core of our recommendation engine (https://www.ekincare.com/blog/using-ai-machine-learning-to-digitize-health-records-d5c34451-1176-4f8b-a8f9-204c046ec30e).
For more details about us, please visit www.ekincare.com
What we offer in return is the opportunity to join a talented team of bright people and to also enjoy:
- 5-day work week and a leave policy covering various time-off benefits, including maternity and paternity leave.
- Premium group medical insurance for the employee and 3 dependents, personal accident insurance coverage, and life insurance coverage.
- Access to the ekincare app, with free access to features like annual health checkup, COVID screening and testing, online doctor consultation, gym access, and many more.
ekincare is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, or gender identity.
Referral Request: If this is not for you, perhaps you have a friend who would be a perfect fit. Please share this job link. Thanks!
Designation: Specialist - Cloud Service Developer (ABL_SS_600)
Position description:
- The person will be primarily responsible for developing solutions using AWS services, e.g. Fargate, Lambda, ECS, ALB, NLB, S3, etc.
- Apply advanced troubleshooting techniques to provide Solutions to issues pertaining to Service Availability, Performance, and Resiliency
- Monitor & Optimize the performance using AWS dashboards and logs
- Partner with Engineering leaders and peers in delivering technology solutions that meet the business requirements
- Work with the cloud team in agile approach and develop cost optimized solutions
Primary Responsibilities:
- Develop solutions using AWS services including Fargate, Lambda, ECS, ALB, NLB, S3, etc.
Reporting Team
- Reporting Designation: Head - Big Data Engineering and Cloud Development (ABL_SS_414)
- Reporting Department: Application Development (2487)
Required Skills:
- AWS certification would be preferred
- Good understanding of monitoring (CloudWatch alarms, logs, custom metrics, SNS configuration)
- Good experience with Fargate, Lambda, ECS, ALB, NLB, S3, Glue, Aurora and other AWS services.
- Knowledge of storage (S3, lifecycle management, event configuration) preferred
- Strong in data structures and programming (PySpark / Python / Golang / Scala)
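As a flavour of Lambda development, here is a minimal sketch of a handler function, the entry point Lambda invokes. The event shape follows S3 put-notifications; the bucket and key names are hypothetical, and it is invoked locally with a fake event for illustration:

```python
import json

def lambda_handler(event, context):
    """Minimal AWS Lambda entry point: pulls the bucket and key out of an
    S3 event record and returns a JSON-serialisable response."""
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]
    # Real processing (e.g. reading the object with boto3) would go here.
    return {"statusCode": 200, "body": json.dumps({"bucket": bucket, "key": key})}

# Invoked locally with a minimal fake event, for illustration only:
fake_event = {"Records": [{"s3": {"bucket": {"name": "demo-bucket"},
                                  "object": {"key": "reports/jan.csv"}}}]}
print(lambda_handler(fake_event, None))
```

In a deployed function the same handler would be wired to an S3 event notification or an ALB/API Gateway route, with CloudWatch capturing its logs and metrics.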
Sr Data Engineer - (Python, Pandas)
at SteelEye
What you’ll do
- Deliver plugins for our Python-based ETL pipelines.
- Deliver Python microservices for provisioning and managing cloud infrastructure.
- Implement algorithms to analyse large data sets.
- Draft design documents that translate requirements into code.
- Deal with challenges associated with handling large volumes of data.
- Assume responsibilities from technical design through technical client support.
- Manage expectations with internal stakeholders and context-switch in a fast paced environment.
- Thrive in an environment that uses AWS and Elasticsearch extensively.
- Keep abreast of technology and contribute to the engineering strategy.
- Champion best development practices and provide mentorship.
What we’re looking for
- Experience in Python 3.
- Python libraries used for data (such as pandas, numpy).
- AWS.
- Elasticsearch.
- Performance tuning.
- Object Oriented Design and Modelling.
- Delivering complex software, ideally in a FinTech setting.
- CI/CD tools.
- Knowledge of design patterns.
- Sharp analytical and problem-solving skills.
- Strong sense of ownership.
- Demonstrable desire to learn and grow.
- Excellent written and oral communication skills.
- Mature collaboration and mentoring abilities.
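To illustrate the pandas side of the role, here is a small, self-contained sketch (the trade records and column names are hypothetical) of a typical ETL step: derive a column, then aggregate it for reporting:

```python
import pandas as pd

# Hypothetical trade records, standing in for one slice of an ETL pipeline.
trades = pd.DataFrame({
    "symbol": ["AAPL", "AAPL", "MSFT"],
    "qty": [100, 50, 200],
    "price": [190.0, 191.0, 410.0],
})

# Derive the notional value of each trade, then aggregate per symbol.
trades["notional"] = trades["qty"] * trades["price"]
per_symbol = trades.groupby("symbol")["notional"].sum()
print(per_symbol.to_dict())  # {'AAPL': 28550.0, 'MSFT': 82000.0}
```

A production pipeline plugin would read its input from a source system, apply transforms like these, and push the result onward (e.g. into Elasticsearch).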
About SteelEye Culture
- Work from home until you are vaccinated against COVID-19
- Top of the line health insurance
- Order discounted meals every day from a dedicated portal
- Fair and simple salary structure
- 30+ holidays in a year
- Fresh fruits every day
- Centrally located. 5 mins to the nearest metro station (MG Road)
- Measured on output and not input
Data Scientist (Applied Research)
● Understand, apply and extend state-of-the-art NLP research to better serve our customers.
● Work closely with engineering, product, and customers to scientifically frame the business problems and come up with the underlying AI models.
● Design, implement, test, deploy, and maintain innovative data and machine learning solutions to accelerate our business.
● Think creatively to identify new opportunities and contribute to high-quality publications or patents.
Desired Qualifications and Experience
● At least 1 year of professional experience.
● Bachelor's in Computer Science or a related field from a top college.
● Extensive knowledge and practical experience in one or more of the following areas: machine learning, deep learning, NLP, recommendation systems, information retrieval.
● Experience applying ML to solve complex business problems from scratch.
● Experience with Python and a deep learning framework like Pytorch/Tensorflow.
● Awareness of the state of the art research in the NLP community.
● Excellent verbal and written communication and presentation skills.
Develop complex queries, pipelines and software programs to solve analytics and data mining problems
Interact with other data scientists, product managers, and engineers to understand business problems, technical requirements to deliver predictive and smart data solutions
Prototype new applications or data systems
Lead data investigations to troubleshoot data issues that arise along the data pipelines
Collaborate with different product owners to incorporate data science solutions
Maintain and improve data science platform
Must Have
BS/MS/PhD in Computer Science, Electrical Engineering or related disciplines
Strong fundamentals: data structures, algorithms, database
5+ years of software industry experience with 2+ years in analytics, data mining, and/or data warehouse
Fluency with Python
Experience developing web services using REST approaches.
Proficiency with SQL/Unix/Shell
Experience in DevOps (CI/CD, Docker, Kubernetes)
Self-driven, challenge-loving, detail oriented, teamwork spirit, excellent communication skills, ability to multi-task and manage expectations
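Since the must-haves call out REST web services, here is a minimal, stdlib-only sketch of a read-only JSON endpoint (the `/metrics` resource and its payload are hypothetical), exercised locally against itself:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical in-memory "metrics" resource, for illustration only.
METRICS = {"rows_processed": 1024, "errors": 0}

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/metrics":
            body = json.dumps(METRICS).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # silence per-request logging for the demo

# Port 0 asks the OS for a free port, so the demo never collides.
server = HTTPServer(("127.0.0.1", 0), MetricsHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

with urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/metrics") as resp:
    data = json.loads(resp.read())
server.shutdown()
print(data)
```

A real service would use a framework (Flask, FastAPI, etc.) for routing, validation, and auth, but the request/response contract is the same.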
Preferred
Industry experience with big data processing technologies such as Spark and Kafka
Experience with machine learning algorithms and/or R a plus
Experience in Java/Scala a plus
Experience with any MPP analytics engines like Vertica
Experience with data integration tools like Pentaho/SAP Analytics Cloud
Data Science Software Engineer
at StatusNeo
Responsibilities Description:
Responsible for the development and implementation of machine learning algorithms and techniques to solve business problems and optimize member experiences. Primary duties may include, but are not limited to:
- Design machine learning projects to address specific business problems determined in consultation with business partners.
- Work with data-sets of varying size and complexity, including both structured and unstructured data.
- Pipe and process massive data streams in distributed computing environments such as Hadoop to facilitate analysis.
- Implement batch and real-time model scoring to drive actions.
- Develop machine learning algorithms to build customized solutions that go beyond standard industry tools and lead to innovative solutions.
- Develop sophisticated visualizations of analysis output for business users.
Experience Requirements:
BS/MA/MS/PhD in Statistics, Computer Science, Mathematics, Machine Learning, Econometrics, Physics, Biostatistics or related Quantitative disciplines. 2-4 years of experience in predictive analytics and advanced expertise with software such as Python, or any combination of education and experience which would provide an equivalent background. Experience in the healthcare sector. Experience in Deep Learning strongly preferred.
Required Technical Skill Set:
- Full cycle of building machine learning solutions:
o Understanding of wide range of algorithms and their corresponding problems to solve
o Data preparation and analysis
o Model training and validation
o Model application to the problem
- Experience using the full suite of open-source programming tools and utilities
- Experience in working in end-to-end data science project implementation.
- 2+ years of experience with development and deployment of Machine Learning applications
- 2+ years of experience with NLP approaches in a production setting
- Experience in building models using bagging and boosting algorithms
- Exposure/experience in building Deep Learning models for NLP/Computer Vision use cases preferred
- Ability to write efficient code with good understanding of core Data Structures/algorithms is critical
- Strong python skills following software engineering best practices
- Experience using code versioning tools like Git, Bitbucket
- Experience in working in Agile projects
- Comfort and familiarity with SQL and the Hadoop ecosystem of tools, including Spark
- Experience managing big data with efficient query programs is good to have
- Good to have experience in training ML models in tools like SageMaker, Kubeflow, etc.
- Good to have experience with frameworks that depict model interpretability, using libraries like LIME, SHAP, etc.
- Experience with Health care sector is preferred
- MS/M.Tech or PhD is a plus
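To make the bagging requirement above concrete, here is a from-scratch toy sketch of the idea: train each base learner (a 1-nearest-neighbour rule here, purely for illustration) on a bootstrap resample of the data, then majority-vote their predictions. All data and function names are hypothetical:

```python
import random
from collections import Counter

def majority(labels):
    """Most common label among the ensemble's votes."""
    return Counter(labels).most_common(1)[0][0]

def one_nn_predict(train, x):
    """1-nearest-neighbour base learner on 1-D points: (value, label) pairs."""
    return min(train, key=lambda p: abs(p[0] - x))[1]

def bagging_predict(train, x, n_estimators=25, seed=0):
    """Bagging: fit each base learner on a bootstrap resample, then vote."""
    rng = random.Random(seed)
    votes = []
    for _ in range(n_estimators):
        sample = [rng.choice(train) for _ in train]  # bootstrap resample
        votes.append(one_nn_predict(sample, x))
    return majority(votes)

# Toy 1-D data: values below 5 are class 0, values above are class 1.
train = [(1, 0), (2, 0), (3, 0), (7, 1), (8, 1), (9, 1)]
print(bagging_predict(train, 2.5), bagging_predict(train, 8.5))
```

Random forests apply the same resample-and-vote scheme with decision trees as the base learner; boosting instead fits learners sequentially, each reweighting the errors of the last.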
We are looking for a savvy Data Engineer to join our growing team of analytics experts.
The hire will be responsible for:
- Expanding and optimizing our data and data pipeline architecture
- Optimizing data flow and collection for cross functional teams.
- Will support our software developers, database architects, data analysts and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects.
- Must be self-directed and comfortable supporting the data needs of multiple teams, systems and products.
Required Skills:
- Experience with Azure: ADLS, Databricks, Stream Analytics, SQL DW, COSMOS DB, Analysis Services, Azure Functions, Serverless Architecture, ARM Templates
- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
- Experience with object-oriented/object function scripting languages: Python, SQL, Scala, Spark-SQL etc.
Nice to have experience with :
- Big data tools: Hadoop, Spark and Kafka
- Data pipeline and workflow management tools: Azkaban, Luigi, Airflow
- Stream-processing systems: Storm
Database: SQL DB
Programming languages: PL/SQL, Spark SQL
Looking for candidates with Data Warehousing experience, strong domain knowledge & experience working as a Technical lead.
The right candidate will be excited by the prospect of optimizing or even re-designing our company's data architecture to support our next generation of products and data initiatives.
- Use data to develop machine learning models that optimize decision making in Credit Risk, Fraud, Marketing, and Operations
- Implement data pipelines, new features, and algorithms that are critical to our production models
- Create scalable strategies to deploy and execute your models
- Write well designed, testable, efficient code
- Identify valuable data sources and automate collection processes.
- Preprocess structured and unstructured data.
- Analyze large amounts of information to discover trends and patterns.
Requirements:
- 1+ years of experience in applied data science or engineering with a focus on machine learning
- Python expertise with good knowledge of machine learning libraries, tools, techniques, and frameworks (e.g. pandas, sklearn, xgboost, lightgbm; logistic regression, random forest classifiers, gradient boosting regressors, etc.)
- Strong quantitative and programming skills with a product-driven sensibility
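Since logistic regression is named among the expected techniques, here is a from-scratch sketch of it on toy, hypothetical risk-score data, fitted with batch gradient descent (in practice sklearn's `LogisticRegression` would be used):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    """Single-feature logistic regression via batch gradient descent."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        grad_w = grad_b = 0.0
        for x, y in zip(xs, ys):
            err = sigmoid(w * x + b) - y  # prediction error drives the update
            grad_w += err * x
            grad_b += err
        w -= lr * grad_w / n
        b -= lr * grad_b / n
    return w, b

# Toy "risk score" data: larger x means higher default probability.
xs = [0.5, 1.0, 1.5, 3.0, 3.5, 4.0]
ys = [0, 0, 0, 1, 1, 1]
w, b = fit_logistic(xs, ys)
predict = lambda x: int(sigmoid(w * x + b) >= 0.5)
print([predict(x) for x in xs])
```

The learned decision boundary sits between the two classes; the same model, with regularization and many features, underlies most production credit-risk scorecards.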
Senior Computer Vision Developer
This position is not for freshers. We are looking for candidates with at least 4 years of industry experience in AI/ML/CV.