NumPy Jobs in Bangalore (Bengaluru)
Data Scientist
Cubera is a data company revolutionizing big data analytics and Adtech through data share value principles wherein the users entrust their data to us. We refine the art of understanding, processing, extracting, and evaluating the data that is entrusted to us. We are a gateway for brands to increase their lead efficiency as the world moves towards web3.
What you’ll do
- Build machine learning models, perform proof-of-concept, experiment, optimize, and deploy your models into production; work closely with software engineers to assist in productionizing your ML models.
- Establish scalable, efficient, automated processes for large-scale data analysis, machine-learning model development, model validation, and serving.
- Research new and innovative machine learning approaches.
- Perform hands-on analysis and modeling of enormous data sets to develop insights that increase Ad Traffic and Campaign Efficacy.
- Collaborate with other data scientists, data engineers, product managers, and business stakeholders to build well-crafted, pragmatic data products.
- Actively take on new projects and constantly try to improve the existing models and infrastructure necessary for offline and online experimentation and iteration.
- Work with your team on ambiguous problem areas in existing or new ML initiatives
What are we looking for?
- Ability to write a SQL query to pull the data you need.
- Fluency in Python and familiarity with its scientific stack, such as NumPy, pandas, scikit-learn, and Matplotlib.
- Experience with TensorFlow, PyTorch, and/or R modelling.
- Ability to understand a business problem and translate and structure it into a data science problem.
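The first two bullets above, pulling data with SQL and analysing it in Python, can be sketched together. The `campaigns` table, its columns, and the numbers below are invented for illustration; only the pattern (query the data, then rank a metric) is the point:

```python
import sqlite3

# Hypothetical schema and data for illustration only; a real table would
# live in the ad-campaign warehouse, not an in-memory SQLite database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE campaigns (name TEXT, clicks INTEGER, impressions INTEGER)")
conn.executemany(
    "INSERT INTO campaigns VALUES (?, ?, ?)",
    [("spring_sale", 120, 4000), ("retarget", 90, 1500), ("brand", 30, 6000)],
)

# "Write a SQL query to pull the data you need": click-through rate per campaign.
rows = conn.execute(
    "SELECT name, CAST(clicks AS REAL) / impressions AS ctr "
    "FROM campaigns ORDER BY ctr DESC"
).fetchall()

for name, ctr in rows:
    print(f"{name}: {ctr:.3f}")  # retarget ranks first: 90/1500 = 0.060
```

In practice the same query-then-compute loop would run against the production warehouse and feed a model or a campaign-efficacy report.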
Job Category: Data Science
Job Type: Full Time
Job Location: Bangalore
We are looking for a motivated Software Development Engineer II, who would like to join our Engineering Team and be part of creating dynamic software applications for our clients across the globe. In this role, you will be responsible for writing scalable code, developing back-end components, and integrating user-facing elements with server-side logic in collaboration with front-end developers.
To be successful as an SDE II, you should possess in-depth knowledge of object-relational mapping, experience with server-side logic, and above-average knowledge of Python programming. Ultimately, as a top-class programmer, you should be able to design highly responsive web applications that perfectly meet the needs of the end users.
- Experience in developing full-stack web applications using Django Framework
- Good experience in modeling databases on relational databases such as PostgreSQL/MySQL
- Familiarity with ORM libraries and in-depth knowledge of writing custom queries
- Proven work experience in building REST APIs using Django Rest Framework
- Familiarity with the Git version control system
- Familiarity with basic AWS services such as S3, CloudFront, etc
- Good understanding of JavaScript and Ajax requests
- Write high-quality code in compliance with Pylint
- Good understanding of HTML, CSS, and JavaScript
- Should have a basic knowledge of AWS services
- Should have good knowledge of Linux-based operating systems
- Good to have exposure to deploying the application to the Cloud
- Good to have knowledge of Multi-tier architecture
- Good to have a working knowledge of NoSQL databases such as MongoDB, DynamoDB, and ElasticSearch
- Good to be familiar with test framework tools like Behave, Pytest, PyUnit, etc.
- Good to have exposure to AWS services such as S3, SQS, SNS, etc.
- Good to have exposure to Celery and Redis
Roles and Responsibilities
- Build highly scalable, maintainable, and secure web applications using Django Framework
- Modeling data on relational databases such as PostgreSQL/MySQL
- Leverage ORM libraries to read and write data efficiently to the database.
- Should be able to write custom queries to retrieve data efficiently from the database
- Build, test and deploy REST APIs using Django Rest Framework
- Follow git best practices to manage the codebase in sync with product development
- Should be able to integrate AWS services into the application
- Write high-quality code in compliance with Pylint
- Should be able to build, test and deploy Multi-tier web applications on the AWS Cloud.
- Responsible for modeling the data on NoSQL databases such as MongoDB, ElasticSearch
- Should be able to implement cache services such as Redis for high-performance applications.
- Should be able to identify production bugs and fix them as required.
- Continuously learn AWS Services and progress toward certification
Basic Qualifications
- B.E/B.Tech in Computer Science or equivalent
- 2-3 years of software development experience developing high-quality, large-scale consumer applications
- Problem-solving capability, excellent communication, and documentation skills
Perks/Benefits
- Exceptional mentorship.
- Immense learning opportunities on the latest technology and platforms.
- Opportunity to work on highly scalable consumer internet-facing applications.
- Make a visible impact in public-facing applications.
- Sponsorship for your AWS Certifications.
- Health Insurance Coverage.
- Accidental Coverage.
- Pay at par with industry standards and comprehensive rewards.
- Exposure to international brands and clients.
We are looking for a motivated Senior Software Development Engineer, who would like to join our Engineering Team and be part of creating dynamic software applications for our clients across the globe. In this role, you will be responsible for writing scalable code, developing back-end components, and integrating user-facing elements with server-side logic in collaboration with front-end developers.
To be successful as an SSDE, you should possess in-depth knowledge of data structures, algorithms, and object-relational mapping, experience with server-side logic, and above-average knowledge of Python programming. Ultimately, as a top-class programmer, you should be able to design highly responsive web applications that perfectly meet the needs of the end users.
Required Skills
- Experience in developing applications using Django or Flask framework.
- Strong in data structures, algorithms, and design patterns.
- Good experience in modeling databases on relational databases such as PostgreSQL/MySQL.
- Familiarity with ORM libraries and in-depth knowledge of writing custom queries.
- Proven work experience in building REST APIs using Flask or Django Rest Framework.
- Familiarity with the Git version control system.
- Familiarity with basic AWS services such as S3, SQS, SNS, etc.
- Good understanding of JavaScript will be an added advantage.
- Write high-quality code in compliance with Pylint.
- Basic understanding of HTML, CSS, and JavaScript.
- Should have a good knowledge of AWS services.
- Should have good knowledge of Linux-based operating systems.
- Should have exposure to deploying the application to the Cloud.
- Good to have worked in Microservices architecture development.
- Good to have a working knowledge of NoSQL databases such as MongoDB, DynamoDB, and ElasticSearch.
- Good to be familiar with test framework tools like Behave, Pytest, PyUnit, etc.
- Good to have exposure to AWS services such as S3, SQS, SNS, etc.
- Having an associate-level AWS Certification would be an added advantage.
- Good to be well-versed in TDD and BDD methodologies.
- Understanding of performance issues.
- Having knowledge of event-driven architecture would be an added advantage.
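Event-driven architecture, mentioned in the final bullet, boils down to publishers emitting named events and decoupled subscribers reacting to them. A toy in-process sketch follows; a real deployment would publish through a broker such as SNS/SQS or Kafka, and the event name and payload here are invented:

```python
from collections import defaultdict
from typing import Callable

# Minimal in-process event bus: handlers register per event name,
# and publish() fans the payload out to every registered handler.
_handlers: defaultdict = defaultdict(list)

def subscribe(event: str, handler: Callable) -> None:
    _handlers[event].append(handler)

def publish(event: str, payload: dict) -> None:
    for handler in _handlers[event]:
        handler(payload)

audit_log = []
subscribe("order.created", lambda p: audit_log.append(p["order_id"]))
subscribe("order.created", lambda p: print("notify:", p["order_id"]))

publish("order.created", {"order_id": 101})  # both handlers fire
```

The key property is decoupling: the publisher does not know, and never imports, the code that audits or notifies.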
Roles and Responsibilities
- Build highly scalable, maintainable, and secure APIs on Microservices using Cloud Native services.
- Responsible for developing applications using Django or Flask framework.
- Modeling data on relational databases such as PostgreSQL/MySQL
- Leverage ORM libraries to read and write data efficiently to the database.
- Should be able to write custom queries to retrieve data efficiently from the database.
- Build, test and deploy REST APIs using Flask or Django Rest Framework.
- Follow git best practices to manage the codebase in sync with product development.
- Should be able to integrate AWS services into the application.
- Write high-quality code in compliance with Pylint.
- Should be able to build, test and deploy Microservices on the AWS Cloud.
- Responsible for modeling the data on NoSQL databases such as MongoDB, DynamoDB, and ElasticSearch.
- Should be able to implement cache services such as Redis for high-performance applications.
- Should be able to identify production bugs and fix them as required.
- Continuously learn AWS Services and progress toward certification.
Basic Qualifications
- B.E/B.Tech in Computer Science or equivalent.
- 3-5 years of software development experience developing high-quality, large-scale consumer applications.
- Problem-solving capability, excellent communication, and documentation skills.
Perks/Benefits
- Exceptional mentorship.
- Immense learning opportunities on the latest technology and platforms.
- Opportunity to work on highly scalable consumer internet-facing applications.
- Make a visible impact in public-facing applications.
- Sponsorship for your AWS Certifications.
- Health Insurance Coverage.
- Accidental Coverage.
- Pay at par with industry standards and comprehensive rewards.
- Exposure to international brands and clients.
- Responsible for setting up a scalable Data warehouse
- Building data pipeline mechanisms to integrate data from various sources across all of Klub’s data.
- Set up data-as-a-service to expose the needed data as part of APIs.
- Have a good understanding of how finance data works.
- Standardize and optimize design thinking across the technology team.
- Collaborate with stakeholders across engineering teams to come up with short and long-term architecture decisions.
- Build robust data models that will help support various reporting requirements for the business, ops, and the leadership team.
- Participate in peer reviews; provide code/design comments.
- Own the problem and deliver to success.
Requirements:
- Overall 3+ years of industry experience
- Prior experience on Backend and Data Engineering systems
- Should have at least 1+ years of working experience in distributed systems
- Deep understanding of the Python tech stack, including libraries and frameworks such as Flask, SciPy, NumPy, and pytest.
- Good understanding of Apache Airflow or similar orchestration tools.
- Good knowledge of data warehouse technologies like Apache Hive or similar. Good knowledge of Apache PySpark or similar.
- Good knowledge of how to build analytics services on the data for different reporting and BI needs.
- Good knowledge of data pipeline/ETL tools like Hevo Data or similar. Good knowledge of query engine technologies like Trino, GraphQL, or similar.
- Deep understanding of dimensional data model concepts. Familiarity with RDBMS (MySQL/PostgreSQL) and NoSQL (MongoDB/DynamoDB) databases, and with caching (Redis or similar).
- Should be proficient in writing SQL queries.
- Good knowledge of Kafka. Able to write clean, maintainable code.
- Built a data warehouse from scratch and set up a scalable data infrastructure.
- Prior experience in fintech would be a plus.
- Prior experience on data modelling.
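The warehouse and pipeline requirements above reduce to extract-transform-load steps. Below is a toy run with invented payment records and table names; in the role described, Airflow would orchestrate such a task against Hive or a similar warehouse rather than an in-memory SQLite database:

```python
import sqlite3

# Extract: pretend this came from an upstream API or database dump.
raw_payments = [
    {"company": "acme", "amount_inr": "125000", "status": "PAID"},
    {"company": "zen",  "amount_inr": "80000",  "status": "failed"},
    {"company": "acme", "amount_inr": "50000",  "status": "paid"},
]

def transform(row: dict) -> tuple:
    # Normalize types and casing before loading into the fact table.
    return (row["company"], int(row["amount_inr"]), row["status"].lower())

# Load into a (hypothetical) warehouse fact table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fact_payments (company TEXT, amount_inr INTEGER, status TEXT)")
conn.executemany("INSERT INTO fact_payments VALUES (?, ?, ?)",
                 [transform(r) for r in raw_payments])

# A reporting query the business/ops team might then ask for.
total = conn.execute(
    "SELECT SUM(amount_inr) FROM fact_payments WHERE status = 'paid'"
).fetchone()[0]
print(total)  # 125000 + 50000 = 175000
```

Everything pipeline-specific (field names, the `fact_payments` table, the status normalization) is an assumption for the sketch; the extract-transform-load shape is what carries over.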
- Experienced in writing complex SQL SELECT queries (window functions and CTEs), with advanced SQL experience
- Should be an individual contributor for the initial few months; a team will be aligned based on project movement
- Strong in querying logic and data interpretation
- Solid communication and articulation skills
- Able to handle stakeholders independently, with little intervention from the reporting manager
- Develop strategies to solve problems in logical yet creative ways
- Create custom reports and presentations accompanied by strong data visualization and storytelling
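Window functions and CTEs, called out in the first requirement, compose naturally: a CTE aggregates, then a window function ranks within partitions. A self-contained sketch against an invented `sales` table (SQLite 3.25+ supports window functions):

```python
import sqlite3  # SQLite >= 3.25 is needed for window functions

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (rep TEXT, region TEXT, amount INTEGER);
INSERT INTO sales VALUES
  ('asha', 'south', 300), ('asha', 'south', 200),
  ('ravi', 'south', 450), ('meena', 'north', 500),
  ('john', 'north', 100);
""")

# The CTE totals sales per rep; the window function then ranks reps
# within each region without collapsing the rows.
query = """
WITH rep_totals AS (
    SELECT rep, region, SUM(amount) AS total
    FROM sales GROUP BY rep, region
)
SELECT rep, region, total,
       RANK() OVER (PARTITION BY region ORDER BY total DESC) AS rnk
FROM rep_totals
ORDER BY region, rnk;
"""
rows = conn.execute(query).fetchall()
for row in rows:
    print(row)
```

The table, reps, and amounts are made up; the CTE-plus-window shape is the part that generalizes to real reporting queries.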
Job Profile: The job profile requires an inter-disciplinary approach to solving technical problems whilst also understanding the business requirements. Knowledge of the power and energy sector and a strong interest in solving national and global energy challenges is a must. This position will involve:
- Developing and maintaining SaaS products for the power and energy sector.
- Working with team members to handle day to day operations and product management tasks.
- Applying Machine Learning techniques to power and energy related data sets.
- Identification and characterization of complex relationships between multiple weather parameters and renewable energy generation.
- Providing inputs for new product development and improvements to existing products and services.
Requirements:
- Educational Qualification: B.Tech/B.E. Electrical and Electronics Engineering/Software Engineering/Mathematics and Computer Science.
- 1-2 years of experience writing code independently in any object-oriented programming language. Knowledge of Python and its packages is highly preferable (NumPy, pandas, Keras, OpenCV, scikit-learn).
- Demonstrable experience of project work (academic/internship/job)
- Ability to work on broad objectives and go from business requirements to code independently.
- Ability to work independently under minimal supervision and in a team;
- Preferable to have some understanding of power systems concepts (Power Generation Technologies, Transmission and Distribution, Load Flow, Renewable Energy Generation, etc.)
- Experience working with large data sets and manipulating data to find relationships between variables.
What we look for:
We are looking for an associate who will crunch data from various sources and extract the key points from it. This associate will also help us improve existing pipelines and build new ones as requested, visualize the data when required, and find flaws in our existing algorithms.
Responsibilities:
- Work with multiple stakeholders to gather the requirements of data or analysis and take action on them.
- Write new data pipelines and maintain the existing pipelines.
- Gather data from various databases and derive the required metrics from it.
Required Skills:
- Experience with Python and libraries like Pandas and NumPy.
- Experience in SQL and an understanding of NoSQL databases.
- Hands-on experience in Data engineering.
- Must have good analytical skills and knowledge of statistics.
- Understanding of Data Science concepts.
- Bachelor's degree in Computer Science or a related field.
- Problem-solving skills and ability to work under pressure.
Nice to have:
- Experience in MongoDB or any other NoSQL database.
- Experience in ElasticSearch.
- Knowledge of Tableau, Power BI or any other visualization tool.
CustomerGlu is a low code interactive user engagement platform. We're backed by Techstars and top-notch VCs from the US like Better Capital and SmartStart.
As we build repeatability into our core product offering at CustomerGlu, a high-quality data infrastructure and applications are emerging as a key requirement to drive more ROI from our interactive engagement programs and to generate ideas for new campaigns.
Hence we are adding more team members to our existing data team and looking for a Data Engineer.
Responsibilities
- Design and build a high-performing data platform that is responsible for the extraction, transformation, and loading of data.
- Develop low-latency real-time data analytics and segmentation applications.
- Setup infrastructure for easily building data products on top of the data platform.
- Be responsible for logging, monitoring, and error recovery of data pipelines.
- Build workflows for automated scheduling of data transformation processes.
- Able to lead a team
Requirements
- 3+ years of experience and ability to manage a team
- Experience working with databases like MongoDB and DynamoDB.
- Knowledge of building batch data processing applications using Apache Spark.
- Understanding of how backend services like HTTP APIs and Queues work.
- Write good quality, maintainable code in one or more programming languages like Python, Scala, and Java.
- Working knowledge of version control systems like Git.
Bonus Skills
- Experience in real-time data processing using Apache Kafka or AWS Kinesis.
- Experience with AWS tools like Lambda and Glue.
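Real-time processing with Kafka or Kinesis, mentioned in the bonus skills, frequently comes down to windowed aggregation over an event stream. A broker-free sketch with invented events, counting per user in tumbling 60-second windows:

```python
from collections import Counter

# Stand-in for a Kafka/Kinesis consumer loop: events arrive as
# (timestamp_seconds, user_id) tuples; real code would iterate a consumer.
events = [(5, "u1"), (12, "u2"), (61, "u1"), (65, "u1"), (130, "u3")]

window_counts: Counter = Counter()
for ts, user in events:
    window = ts // 60                  # tumbling 60-second window index
    window_counts[(window, user)] += 1

print(dict(window_counts))
```

The events and window size are assumptions; in a production consumer the same aggregation would run continuously and flush each window downstream for segmentation or analytics.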
A Reputed Analytics Consulting Company in Data Science field
Job Title : Analyst / Sr. Analyst – Data Science Developer - Python
Exp : 2 to 5 yrs
Loc : Bangalore / Hyderabad / Chennai
NP: Candidates should join within 2 months at most; immediate joiners preferred.
About the role:
We are looking for an Analyst / Senior Analyst who works in the analytics domain with a strong python background.
Desired Skills, Competencies & Experience:
- 2-4 years of experience working in the analytics domain with a strong Python background.
- Visualization skills in Python with Plotly, Matplotlib, Seaborn, etc. Ability to create customized plots using such tools.
- Ability to write effective, scalable, and modular code. Should be able to understand, test, and debug existing Python project modules quickly and contribute to them.
- Should be familiar with Git workflows.
Good to Have:
- Familiarity with cloud platforms like AWS, AzureML, Databricks, GCP, etc.
- Understanding of shell scripting and Python package development.
- Experience with Python data science packages like Pandas, NumPy, sklearn, etc.
- ML model building and evaluation experience using sklearn.
- Writing efficient, reusable, testable, and scalable code
- Understanding, analyzing, and implementing – Business needs, feature modification requests, conversion into software components
- Integration of user-oriented elements into different applications, data storage solutions
- Developing – Backend components to enhance performance and responsiveness, server-side logic and platform, statistical learning models, highly responsive web applications
- Designing and implementing – High availability and low latency applications, data protection and security features
- Performance tuning and automation of application
- Working with Python libraries like Pandas, NumPy, etc.
- Creating predictive models for AI and ML-based features
- Keeping abreast with the latest technology and trends
- Fine-tune and develop AI/ML-based algorithms based on results
Technical Skills-
Good proficiency in,
- Python frameworks like Django, etc.
- Web frameworks and RESTful APIs
- Core Python fundamentals and programming
- Code packaging, release, and deployment
- Database knowledge
- Loops, conditionals, and control statements
- Object-relational mapping
- Code versioning tools like Git, Bitbucket
Fundamental understanding of,
- Front-end technologies like JavaScript, CSS3, and HTML5
- AI, ML, Deep Learning, Version Control, Neural networking
- Data visualization, statistics, data analytics
- Design principles that are executable for a scalable app
- Creating predictive models
- Libraries like TensorFlow, scikit-learn, etc.
- Multi-process architecture
- Basic knowledge about Object Relational Mapper libraries
- Ability to integrate databases and various data sources into a unified system
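Object-relational mapping, listed above, is the idea of hydrating table rows into objects. The following is a deliberately tiny hand-rolled mapper to illustrate the concept only; real projects would use the Django ORM or SQLAlchemy:

```python
import sqlite3

class User:
    """One class maps to one table; each row becomes one object."""
    def __init__(self, id: int, email: str):
        self.id, self.email = id, email

    @classmethod
    def all(cls, conn):
        # Hydrate every row of the users table into User instances.
        return [cls(*row) for row in conn.execute("SELECT id, email FROM users")]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(1, "a@example.com"), (2, "b@example.com")])

users = User.all(conn)
print([u.email for u in users])
```

The table name and columns are invented; the row-to-object mapping is the pattern a full ORM generalizes with relations, lazy loading, and query builders.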
Responsibilities Description:
Responsible for the development and implementation of machine learning algorithms and techniques to solve business problems and optimize member experiences. Primary duties may include, but are not limited to:
- Design machine learning projects to address specific business problems determined in consultation with business partners.
- Work with data sets of varying size and complexity, including both structured and unstructured data.
- Pipe and process massive data streams in distributed computing environments such as Hadoop to facilitate analysis.
- Implement batch and real-time model scoring to drive actions.
- Develop machine learning algorithms to build customized solutions that go beyond standard industry tools and lead to innovative solutions.
- Develop sophisticated visualizations of analysis output for business users.
Experience Requirements:
BS/MA/MS/PhD in Statistics, Computer Science, Mathematics, Machine Learning, Econometrics, Physics, Biostatistics, or a related quantitative discipline. 2-4 years of experience in predictive analytics and advanced expertise with software such as Python, or any combination of education and experience that would provide an equivalent background. Experience in the healthcare sector. Experience in Deep Learning strongly preferred.
Required Technical Skill Set:
- Full cycle of building machine learning solutions:
o Understanding of a wide range of algorithms and their corresponding problems to solve
o Data preparation and analysis
o Model training and validation
o Model application to the problem
- Experience using the full range of open-source programming tools and utilities
- Experience in working in end-to-end data science project implementation.
- 2+ years of experience with development and deployment of Machine Learning applications
- 2+ years of experience with NLP approaches in a production setting
- Experience in building models using bagging and boosting algorithms
- Exposure/experience in building Deep Learning models for NLP/Computer Vision use cases preferred
- Ability to write efficient code with good understanding of core Data Structures/algorithms is critical
- Strong Python skills following software engineering best practices
- Experience in using code versioning tools like Git and Bitbucket
- Experience in working in Agile projects
- Comfort and familiarity with SQL and the Hadoop ecosystem of tools, including Spark
- Experience managing big data with efficient query programs is good to have
- Good to have experience in training ML models in tools like SageMaker, Kubeflow, etc.
- Good to have experience in frameworks to depict interpretability of models using libraries like LIME, SHAP, etc.
- Experience in the healthcare sector is preferred
- MS/M.Tech or PhD is a plus
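Bagging, one of the algorithm families listed above, trains each weak learner on a bootstrap resample and combines them by majority vote. A from-scratch sketch on a toy 1-D dataset; production code would reach for scikit-learn's `BaggingClassifier` instead:

```python
import random

random.seed(0)
# Toy 1-D dataset: class 0 below 5, class 1 above 5.
X = [1.0, 2.0, 3.0, 4.0, 6.0, 7.0, 8.0, 9.0]
y = [0,   0,   0,   0,   1,   1,   1,   1]

def fit_stump(xs, ys):
    """A decision stump: threshold at the midpoint of the class means."""
    m0 = sum(x for x, t in zip(xs, ys) if t == 0) / ys.count(0)
    m1 = sum(x for x, t in zip(xs, ys) if t == 1) / ys.count(1)
    thr = (m0 + m1) / 2
    return lambda x: int(x > thr)

stumps = []
for _ in range(25):
    while True:  # resample until both classes appear in the bootstrap sample
        idx = [random.randrange(len(X)) for _ in range(len(X))]
        ys = [y[i] for i in idx]
        if 0 in ys and 1 in ys:
            break
    stumps.append(fit_stump([X[i] for i in idx], ys))

def predict(x):
    votes = sum(s(x) for s in stumps)      # majority vote across stumps
    return int(votes > len(stumps) / 2)

print(predict(2.5), predict(7.5))  # 0 1
```

Boosting differs in that learners are trained sequentially, each reweighting the examples the previous ones got wrong, rather than independently on bootstrap samples.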
Job Description
JD - Python Developer
Responsibilities
- Design and implement software features based on requirements
- Architect new features for products or tools
- Articulate and document designs as needed
- Prepare and present technical training
- Provide estimates and status for development tasks
- Work effectively in a highly collaborative and iterative development process
- Work effectively with the Product, QA, and DevOps team.
- Troubleshoot issues and correct defects when required
- Build unit and integration tests that assure correct behavior and increase the maintainability of the code base
- Apply dev-ops and automation as needed
- Commit to continuous learning and enhancement of skills and product knowledge
Required Qualifications
- Minimum of 5 years of relevant experience in development and design
- Proficiency in Python and extensive knowledge of the associated libraries; extensive experience with Python data science libraries: TensorFlow, NumPy, SciPy, Pandas, etc.
- Strong skills in producing visuals with algorithm results
- Strong SQL and working knowledge of Microsoft SQL Server and other data storage technologies
- Strong web development skills; advanced knowledge of ORM and data access patterns
- Experienced in working with Scrum and Agile methodologies
- Excellent debugging and troubleshooting skills
- Deep knowledge of DevOps practices and cloud services
- Strong collaboration and verbal and written communication skills
- Self-starter, detail-oriented, organized, and thorough
- Strong interpersonal skills and a team-oriented mindset
- Fast learner and creative capacity for developing innovative solutions to complex problems
Skills
Python, SQL, TensorFlow, NumPy, SciPy, Pandas
Develop state-of-the-art algorithms in the fields of Computer Vision, Machine Learning, and Deep Learning.
Provide software specifications and production code on time to meet project milestones
Qualifications
BE or Master's degree with 3+ years of experience
Must have prior knowledge and experience in image processing and video processing
Should have knowledge of object detection and recognition
Must have experience in feature extraction, segmentation, and classification of images
Face detection, alignment, recognition, tracking & attribute recognition
Excellent understanding and project/job experience in Machine Learning, particularly in areas of Deep Learning – CNN, RNN, TensorFlow, Keras, etc.
Real-world expertise in deep learning applied to Computer Vision problems
Strong foundation in Mathematics
Strong development skills in Python
Must have worked with vision and deep learning libraries and frameworks such as OpenCV, TensorFlow, PyTorch, Keras
Quick learner of new technologies
Ability to work independently as well as part of a team
Knowledge of working closely with version control (Git)
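The image-processing and CNN skills above share one primitive: sliding a kernel over an image. A dependency-free sketch (strictly cross-correlation, which is what CNN layers compute) using a Sobel-style kernel to respond to a vertical edge; OpenCV or NumPy would do this in one call:

```python
def convolve2d(img, kernel):
    """Valid-mode 2-D cross-correlation over nested lists."""
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = len(img) - kh + 1, len(img[0]) - kw + 1
    out = [[0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            out[i][j] = sum(
                img[i + a][j + b] * kernel[a][b]
                for a in range(kh) for b in range(kw)
            )
    return out

# A vertical edge between the dark (0) and bright (9) halves of the image.
image = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]
sobel_x = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]
print(convolve2d(image, sobel_x))  # strong response across the edge
```

Feature extraction, segmentation, and CNN feature maps all build on stacks of exactly this operation with learned or hand-designed kernels.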
- Performs analytics to extract insights from the organization's raw historical data.
- Generates usable training datasets for any/all MV projects with the help of annotators, if needed.
- Analyses user trends and identifies their biggest bottlenecks in the Hammoq workflow.
- Tests the short/long term impact of productized MV models on those trends.
- Skills - NumPy, Pandas, Apache Spark, PySpark, and ETL are mandatory.
- In-depth knowledge of Core Python, with Django end-to-end application development.
- Experience in web technologies - HTML, CSS, JavaScript.
- Database - SQL Server / Postgres / NoSQL databases.
- Good understanding of algorithms and data structures.
- Knowledge in ORM (Object Relational Mapper) libraries.
- Experience in integrating multiple data sources and databases into one system.
- Knowledge in REST / SOAP API
- Knowledge in version control tools like Git
- Experience with various cloud technologies.