About Us:
Optimo Capital is a newly established NBFC founded by Prashant Pitti, who is also a co-founder of EaseMyTrip (a billion-dollar listed startup that grew profitably without any funding).
Our mission is to serve the underserved MSME businesses with their credit needs in India. With less than 15% of MSMEs having access to formal credit, we aim to bridge this credit gap through a phygital model (physical branches + digital decision-making).
As a technology and data-first company, tech lovers and data enthusiasts play a crucial role in building the analytics & tech at Optimo that helps the company thrive.
What We Offer:
Join our dynamic startup team as a Senior Data Analyst and play a crucial role in core data analytics projects involving credit risk, lending strategy, credit underwriting features analytics, collections, and portfolio management. The analytics team at Optimo works closely with the Credit & Risk departments, helping them make data-backed decisions.
This is an exceptional opportunity to learn, grow, and make a significant impact in a fast-paced startup environment. We believe that the freedom and accountability to make decisions in analytics and technology bring out the best in you and help us build the best for the company. This environment offers you a steep learning curve and an opportunity to experience the direct impact of your analytics contributions. Along with this, we offer industry-standard compensation.
What We Look For:
We are looking for individuals with a strong analytical mindset and a fundamental understanding of the lending industry, primarily focused on credit risk. We value not only your skills but also your attitude and hunger to learn, grow, lead, and thrive, both individually and as part of a team. We encourage you to take on challenges, bring in new ideas, implement them, and build the best analytics systems. Your willingness to put in the extra hours to build the best will be recognized.
Skills/Requirements:
- Credit Risk & Underwriting: Fundamental knowledge of credit risk and underwriting processes is mandatory. Experience in any lending financial institution is a must. A thorough understanding of all the features evaluated in the underwriting process like credit report info, bank statements, GST data, demographics, etc., is essential.
- Analytics (Python): Excellent proficiency in Python, particularly Pandas and NumPy. A strong analytical mindset and the ability to extract actionable insights from any analysis are crucial. The ability to convert given problem statements into actionable analytics tasks and frame effective approaches to tackle them is highly desirable.
- Good to have but not mandatory: REST APIs (a fundamental understanding of APIs and previous experience or projects related to API development or integrations) and Git (proficiency in version control systems; experience in collaborative projects using Git is highly valued).
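The Pandas work described above might look like the following minimal sketch. The column names and cut-offs are invented for illustration and are not Optimo's actual credit policy:

```python
import pandas as pd

# Hypothetical underwriting snapshot; column names and thresholds are
# illustrative only, not a real credit policy.
applications = pd.DataFrame({
    "app_id": [101, 102, 103, 104],
    "bureau_score": [712, 645, 780, 590],
    "avg_monthly_balance": [54000, 12000, 98000, 8000],
})

# Example screening rule: an application must clear both a bureau-score
# and a bank-balance threshold (both thresholds are made up here).
applications["passes_screen"] = (
    (applications["bureau_score"] >= 680)
    & (applications["avg_monthly_balance"] >= 25000)
)

approval_rate = applications["passes_screen"].mean()
print(approval_rate)  # 0.5
```

In practice the same vectorized boolean logic scales to bureau, bank-statement, and GST features pulled from real data sources.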
What You'll Be Working On:
- Analyze data from different data sources, extract information, and create action items to tackle the given open-ended problems.
- Build strong analytics systems and dashboards that provide easy access to data and insights, including the current status of the company, portfolio health, static pool, branch-wise performance, TAT (turnaround time) monitoring, and more.
- Assist the credit and risk team with insights and action items, helping them make data-backed decisions and fine-tune the credit policy (high involvement in the credit and underwriting process).
- Work on different rule engines that automate the underwriting process end-to-end.
Other Requirements:
- Availability for full-time work in Bangalore. Immediate joiners are preferred.
- Strong passion for analytics and problem-solving.
- At least 1 year of industry experience in an analytics role, specifically in a lending institution, is a must.
- Self-motivated and capable of working both independently and collaboratively.
If you are ready to embark on an exciting journey of growth, learning, and innovation, apply now to join our pioneering team in Bangalore.
We are seeking a talented UiPath Developer with experience in Python, SQL, Pandas, and NumPy to join our dynamic team. The ideal candidate will have hands-on experience developing RPA workflows using UiPath, along with the ability to automate processes through scripting, data manipulation, and database queries.
This role offers the opportunity to collaborate with cross-functional teams to streamline operations and build innovative automation solutions.
Key Responsibilities:
- Design, develop, and implement RPA workflows using UiPath.
- Build and maintain Python scripts to enhance automation capabilities.
- Utilize Pandas and NumPy for data extraction, manipulation, and transformation within automation processes.
- Write optimized SQL queries to interact with databases and support automation workflows.
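The Pandas/NumPy step in an automation flow often amounts to cleaning an extract before it is loaded downstream. A minimal sketch, with an invented invoice schema rather than any real system's:

```python
import numpy as np
import pandas as pd

# Illustrative invoice extract such as an RPA workflow might hand off;
# the schema is an assumption, not a real system's.
invoices = pd.DataFrame({
    "vendor": ["Acme", "Acme", "Globex", "Globex", "Initech"],
    "amount": [1200.0, 800.0, np.nan, 450.0, 300.0],
})

# Typical pre-load cleanup: fill missing amounts, then aggregate per
# vendor before a later step writes the summary to a database.
summary = (
    invoices.fillna({"amount": 0.0})
    .groupby("vendor", as_index=False)["amount"]
    .sum()
)
print(summary.to_dict("records"))
```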
Skills and Qualifications:
- 2 to 5 years of experience in UiPath development.
- Strong proficiency in Python and working knowledge of Pandas and NumPy.
- Good experience with SQL for database interactions.
- Ability to design scalable and maintainable RPA solutions using UiPath.
at Rayvat Outsourcing
Job Title: Generative AI Engineer (Specialist in Deep Learning)
Location: Gandhinagar, Ahmedabad, Gujarat
Company: Rayvat Outsourcing
Salary: Up to ₹2,50,000 per annum
Job Type: Full-Time
Experience: 0 to 1 Year
Job Overview:
We are seeking a talented and enthusiastic Generative AI Engineer to join our team. As an Intermediate-level engineer, you will be responsible for developing and deploying state-of-the-art generative AI models to solve complex problems and create innovative solutions. You will collaborate with cross-functional teams, working on a variety of projects that range from natural language processing (NLP) to image generation and multimodal AI systems. The ideal candidate has hands-on experience with machine learning models, deep learning techniques, and a passion for artificial intelligence.
Key Responsibilities:
· Develop, fine-tune, and deploy generative AI models such as GPT, BERT, DALL·E, Stable Diffusion, etc.
· Research and implement cutting-edge machine learning algorithms in NLP, computer vision, and multimodal systems.
· Collaborate with data scientists, ML engineers, and product teams to integrate AI solutions into products and platforms.
· Create APIs and pipelines to deploy models in production environments, ensuring scalability and performance.
· Analyze large datasets to identify key features, patterns, and use cases for model training.
· Debug and improve existing models by evaluating performance metrics and applying optimization techniques.
· Stay up-to-date with the latest advancements in AI, deep learning, and generative models to continually enhance the solutions.
· Document technical workflows, including model architecture, training processes, and performance reports.
· Ensure ethical use of AI, adhering to guidelines around AI fairness, transparency, and privacy.
Qualifications:
· Bachelor’s/Master’s degree in Computer Science, Machine Learning, Data Science, or a related field.
· 2-4 years of hands-on experience in machine learning and AI development, particularly in generative AI.
· Proficiency with deep learning frameworks such as TensorFlow, PyTorch, or similar.
· Experience with NLP models (e.g., GPT, BERT) or image-generation models (e.g., GANs, diffusion models).
· Strong knowledge of Python and libraries like NumPy, Pandas, scikit-learn, etc.
· Experience with cloud platforms (e.g., AWS, GCP, Azure) for AI model deployment and scaling.
· Familiarity with APIs, RESTful services, and microservice architectures.
· Strong problem-solving skills and the ability to troubleshoot and optimize AI models.
· Good understanding of data preprocessing, feature engineering, and handling large datasets.
· Excellent written and verbal communication skills, with the ability to explain complex concepts clearly.
Preferred Skills:
· Experience with multimodal AI systems (combining text, image, and/or audio data).
· Familiarity with ML Ops and CI/CD pipelines for deploying machine learning models.
· Experience in A/B testing and performance monitoring of AI models in production.
· Knowledge of ethical AI principles and AI governance.
What We Offer:
· Competitive salary and benefits package.
· Opportunities for professional development and growth in the rapidly evolving AI field.
· Collaborative and dynamic work environment, with access to cutting-edge AI technologies.
· Work on impactful projects with real-world applications.
We are seeking a Data Engineer (Snowflake, BigQuery, Redshift) to join our team. In this role, you will be responsible for developing and maintaining fault-tolerant pipelines spanning multiple database systems.
Responsibilities:
- Collaborate with engineering teams to create REST API-based pipelines for large-scale MarTech systems, optimizing for performance and reliability.
- Develop comprehensive data quality testing procedures to ensure the integrity and accuracy of data across all pipelines.
- Build scalable dbt models and configuration files, leveraging best practices for efficient data transformation and analysis.
- Partner with lead data engineers in designing scalable data models.
- Conduct thorough debugging and root cause analysis for complex data pipeline issues, implementing effective solutions and optimizations.
- Follow and adhere to group's standards such as SLAs, code styles, and deployment processes.
- Anticipate breaking changes and implement backwards-compatibility strategies for API schema changes.
- Assist the team in monitoring pipeline health via observability tools and metrics.
- Participate in refactoring efforts as platform application needs evolve over time.
Requirements:
- Bachelor's degree or higher in Computer Science, Engineering, Mathematics, or a related field.
- 3+ years of professional experience with a cloud database such as Snowflake, BigQuery, or Redshift.
- 1+ years of professional experience with dbt (Cloud or Core).
- Exposure to various data processing technologies such as OLAP and OLTP and their applications in real-world scenarios.
- Exposure to work cross-functionally with other teams such as Product, Customer Success, Platform Engineering.
- Familiarity with orchestration tools such as Dagster/Airflow.
- Familiarity with ETL/ELT tools such as dltHub, Meltano, Airbyte, Fivetran, and dbt.
- High intermediate to advanced SQL skills (comfort with CTEs, window functions).
- Proficiency with Python and related libraries (e.g., pandas, sqlalchemy, psycopg2) for data manipulation, analysis, and automation.
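The SQL window functions and pandas skills listed above overlap directly. A toy sketch (the events schema is invented) showing the pandas equivalent of `SUM(amount) OVER (PARTITION BY user_id ORDER BY day)`:

```python
import pandas as pd

# Toy events table; a SQL window function such as
#   SUM(amount) OVER (PARTITION BY user_id ORDER BY day)
# has this pandas equivalent.
events = pd.DataFrame({
    "user_id": [1, 1, 2, 2, 2],
    "day": [1, 2, 1, 2, 3],
    "amount": [10, 20, 5, 5, 5],
})
events = events.sort_values(["user_id", "day"])
events["running_total"] = events.groupby("user_id")["amount"].cumsum()
print(events["running_total"].tolist())  # [10, 30, 5, 10, 15]
```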
Benefits:
- Work location: Remote
- 5-day work week
You can apply directly through the link: https://zrec.in/e9578?source=CareerSite
Explore our Career Page for more such jobs: careers.infraveo.com
Job Description: AI/ML Engineer
Location: Bangalore (On-site)
Experience: 2+ years of relevant experience
About the Role:
We are seeking a skilled and passionate AI/ML Engineer to join our team in Bangalore. The ideal candidate will have over two years of experience in developing, deploying, and maintaining AI and machine learning models. As an AI/ML Engineer, you will work closely with our data science team to build innovative solutions and deploy them in a production environment.
Key Responsibilities:
- Develop, implement, and optimize machine learning models.
- Perform data manipulation, exploration, and analysis to derive actionable insights.
- Use advanced computer vision techniques, including YOLO and other state-of-the-art methods, for image processing and analysis.
- Collaborate with software developers and data scientists to integrate AI/ML solutions into the company's applications and products.
- Design, test, and deploy scalable machine learning solutions using TensorFlow, OpenCV, and other related technologies.
- Ensure the efficient storage and retrieval of data using SQL and data manipulation libraries such as pandas and NumPy.
- Contribute to the development of backend services using Flask or Django for deploying AI models.
- Manage code using Git and containerize applications using Docker when necessary.
- Stay updated with the latest advancements in AI/ML and integrate them into existing projects.
Required Skills:
- Proficiency in Python and its associated libraries (NumPy, pandas).
- Hands-on experience with TensorFlow for building and training machine learning models.
- Strong knowledge of linear algebra and data augmentation techniques.
- Experience with computer vision libraries like OpenCV and frameworks like YOLO.
- Proficiency in SQL for database management and data extraction.
- Experience with Flask for backend development.
- Familiarity with version control using Git.
Optional Skills:
- Experience with PyTorch, Scikit-learn, and Docker.
- Familiarity with Django for web development.
- Knowledge of GPU programming using CuPy and CUDA.
- Understanding of parallel processing techniques.
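The data-augmentation knowledge called out in the required skills can be sketched in a few lines of NumPy. This is a minimal toy example on an invented "image" array; real pipelines would use OpenCV or framework-native ops:

```python
import numpy as np

# Minimal data-augmentation sketch on a toy grayscale "image".
rng = np.random.default_rng(0)
image = np.arange(12, dtype=np.float32).reshape(3, 4)

flipped = image[:, ::-1]                                # horizontal flip
noisy = image + rng.normal(0.0, 0.1, size=image.shape)  # pixel jitter

print(flipped[0].tolist())  # [3.0, 2.0, 1.0, 0.0]
```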
Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Demonstrated experience in AI/ML, with a portfolio of past projects.
- Strong analytical and problem-solving skills.
- Excellent communication and teamwork skills.
Why Join Us?
- Opportunity to work on cutting-edge AI/ML projects.
- Collaborative and dynamic work environment.
- Competitive salary and benefits.
- Professional growth and development opportunities.
If you're excited about using AI/ML to solve real-world problems and have a strong technical background, we'd love to hear from you!
Apply now to join our growing team and make a significant impact!
Who are we looking for?
We are looking for a Senior Data Scientist, who will design and develop data-driven solutions using state-of-the-art methods. You should be someone with strong and proven experience in working on data-driven solutions. If you feel you’re enthusiastic about transforming business requirements into insightful data-driven solutions, you are welcome to join our fast-growing team to unlock your best potential.
Job Summary
- Supporting company mission by understanding complex business problems through data-driven solutions.
- Designing and developing machine learning pipelines in Python and deploying them in AWS/GCP, ...
- Developing end-to-end ML production-ready solutions and visualizations.
- Analyse large sets of time-series industrial data from various sources, such as production systems, sensors, and databases to draw actionable insights and present them via custom dashboards.
- Communicating complex technical concepts and findings to non-technical stakeholders of the projects
- Implementing the prototypes using suitable statistical tools and artificial intelligence algorithms.
- Preparing high-quality research papers and participating in conferences to present and report experimental results and research findings.
- Carrying out research collaborating with internal and external teams and facilitating review of ML systems for innovative ideas to prototype new models.
Qualification and experience
- B.Tech/Master's/Ph.D. in computer science, electrical engineering, mathematics, data science, or related fields.
- 5+ years of professional experience in the field of machine learning, and data science.
- Experience with large-scale Time-series data-based production code development is a plus.
Skills and competencies
- Familiarity with Docker, and ML Libraries like PyTorch, sklearn, pandas, SQL, and Git is a must.
- Ability to work on multiple projects. Must have strong design and implementation skills.
- Ability to conduct research based on complex business problems.
- Strong presentation skills and the ability to collaborate in a multi-disciplinary team.
- Must have programming experience in Python.
- Excellent English communication skills, both written and verbal.
Benefits and Perks
- Culture of innovation, creativity, learning, and even failure, we believe in bringing out the best in you.
- Progressive leave policy for effective work-life balance.
- Get mentored by highly qualified internal resource groups and opportunity to avail industry-driven mentorship program, as we believe in empowering people.
- Multicultural peer groups and supportive workplace policies.
- Work from beaches, hills, mountains, and many more with the yearly workcation program; we believe in mixing elements of vacation and work.
Hiring Process
- Call with Talent Acquisition Team: After application screening, a first-level screening with the talent acquisition team to understand the candidate's goals and alignment with the job requirements.
- First Round: Technical round 1 to gauge your domain knowledge and functional expertise.
- Second Round: In-depth technical round and discussion about the departmental goals, your role, and expectations.
- Final HR Round: Culture fit round and compensation discussions.
- Offer: Congratulations, you made it!
If this position sparked your interest, apply now to initiate the screening process.
Job Description:-
Designation : Python Developer
Location : Indore | WFO
Skills : Python, Django, Flask, NumPy, Pandas, RESTful APIs, AWS.
Python Developer Responsibilities:-
1. Coordinating with development teams to determine application requirements.
2. Writing scalable code using Python programming language.
3. Testing and debugging applications.
4. Developing back-end components.
5. Integrating user-facing elements using server-side logic.
6. Assessing and prioritizing client feature requests.
7. Integrating data storage solutions.
8. Coordinating with front-end developers.
9. Reprogramming existing databases to improve functionality.
10. Developing digital tools to monitor online traffic.
Python Developer Requirements:-
1. Bachelor's degree in computer science, computer engineering, or related field.
2. At least 3+ years of experience as a Python developer.
3. Expert knowledge of Python and related frameworks including Django and Flask.
4. A deep understanding of multi-process architecture and the threading limitations of Python.
5. Familiarity with server-side templating languages including Jinja2 and Mako.
6. Ability to integrate multiple data sources into a single system.
7. Familiarity with testing tools.
8. Ability to collaborate on projects and work independently when required.
Skills - Python, Django, Flask, NumPy, Pandas, RESTful APIs, AWS.
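The Flask and RESTful-API skills listed for this role can be illustrated with a minimal endpoint sketch; the `/health` route is invented for illustration, not part of any real service here:

```python
from flask import Flask, jsonify

# Minimal Flask REST sketch: one JSON endpoint.
app = Flask(__name__)

@app.route("/health")
def health():
    # Return a small JSON payload; real endpoints would query a DB, etc.
    return jsonify(status="ok")
```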
Role / Designation : Python Developer
Location: Bangalore, India
Skills : Certifications: AI-900, AZ-900. Primary skills: Python, Flask, web development. Knowledge of Azure Cloud, application development, API development.
Profile: IT Professional with 6 +years of experience in
• Hands-on experience with Python libraries such as Pandas, NumPy, and OpenPyXL
• Hands-on experience using Python libraries with multiple document types (Excel, CSV, PDF, and images)
• Working with huge data sets, performing data analysis, and providing ETL and EDA analysis reports.
• 5+ years' experience in any of the programming languages like Python (mandatory), Java, and C/C++.
• Must have experience in Azure PaaS and IaaS services like Azure Function App, Azure Kubernetes Service, Storage Account, Key Vault, etc.
• Experience with databases, both SQL and NoSQL
• Develop methodologies for the data analysis, data extraction, data transformations, preprocessing of data.
• Experience in deploying applications, packages in Azure environment.
• Writing scalable code using Python programming language.
• Testing and debugging applications.
• Developing back-end components.
• Integrating user-facing elements using server-side logic.
• Excellent problem solving/analytical skills and complex troubleshooting methods.
• Ability to work through ambiguous situations.
• Excellent presentation, verbal, and written communication skills.
Education: BE/BTech/BSc
REQUIREMENTS
Core skills:
● Technical Experience (Must have): working knowledge of any visualization tool (Metabase, Tableau, QlikSense, Looker, Superset, Power BI, etc.), strong SQL & Python, Excel/Gsheet
● Product Knowledge (Must have): knowledge of Google Analytics/BigQuery or Mixpanel; must have worked on A/B testing & event writing. Must be familiar with product (app, website) data and have good product sense
● Analytical Thinking: outstanding analytical and problem-solving skills; ability to break down the problem statement during execution.
Core Experience:
● Overall experience of 2-5 years in the analytics domain
● He/she should have hands-on experience in the analytics domain: building data-story dashboards, doing RCA, and analyzing data.
● Understanding of and hands-on experience with the product, i.e. funnels, A/B experiments, etc.
● Ability to define the right metric for a specific product feature or experiment and do the impact analysis.
● Ability to explain complex data insights to a wider audience and recommend next steps.
● Experience in analyzing, exploring, and mining large data sets to support reporting and ad-hoc analysis
● Strong attention to detail and accuracy of output.
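The A/B-testing experience asked for above typically reduces to comparing conversion counts between variants. A hedged sketch with invented counts, using a chi-square test of independence:

```python
from scipy.stats import chi2_contingency

# Invented A/B counts: rows are variants A and B,
# columns are [converted, not converted].
table = [[100, 900],   # variant A: 10% conversion
         [160, 840]]   # variant B: 16% conversion

# Chi-square test of independence on the 2x2 contingency table.
chi2, p_value, dof, _ = chi2_contingency(table)
significant = p_value < 0.05
print(significant)  # True
```

With these (made-up) sample sizes the lift from 10% to 16% is comfortably significant; a real analysis would also report confidence intervals and check the experiment's power up front.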
· The Objective:
You will play a crucial role in designing, implementing, and maintaining our data infrastructure, run tests and update the systems
· Job function and requirements
o Expert in Python, Pandas, and NumPy, with knowledge of Python web frameworks such as Django and Flask.
o Able to integrate multiple data sources and databases into one system.
o Basic understanding of frontend technologies like HTML, CSS, JavaScript.
o Able to build data pipelines.
o Strong unit test and debugging skills.
o Understanding of fundamental design principles behind a scalable application
o Good understanding of RDBMS databases such as MySQL or PostgreSQL.
o Able to analyze and transform raw data.
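The "analyze and transform raw data" requirement often starts with a step like the following. A minimal sketch; the columns are invented for illustration, not a real schema:

```python
import pandas as pd

# Sketch of one raw-to-clean pipeline step on an invented feed.
raw = pd.DataFrame({
    "contact": ["a@x.com", "a@x.com", "b@y.com"],
    "signal": ["job_change", "job_change", "funding"],
    "seen_at": ["2024-01-02", "2024-01-02", "2024-01-05"],
})

clean = (
    raw.drop_duplicates()  # drop verbatim repeats from the source feed
       .assign(seen_at=lambda d: pd.to_datetime(d["seen_at"]))
)
print(len(clean))  # 2
```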
· About us
Mitibase helps companies find warm prospects every month that are most relevant, and then helps their team act on those with automation. We do so by automatically tracking key accounts and contacts for job changes and relationship triggers and surfacing them as warm leads in your sales pipeline.
Job Description:
Machine Learning / AI Engineer (with 3+ years of experience)
We are seeking a highly skilled and passionate Machine Learning / AI Engineer to join our newly established data science practice area. In this role, you will primarily focus on working with Large Language Models (LLMs) and contribute to building generative AI applications. This position offers an exciting opportunity to shape the future of AI technology while charting an interesting career path within our organization.
Responsibilities:
1. Develop and implement machine learning models: Utilize your expertise in machine learning and artificial intelligence to design, develop, and deploy cutting-edge models, with a particular emphasis on Large Language Models (LLMs). Apply your knowledge to solve complex problems and optimize performance.
2. Building generative AI applications: Collaborate with cross-functional teams to conceptualize, design, and build innovative generative AI applications. Work on projects that push the boundaries of AI technology and deliver impactful solutions to real-world problems.
3. Data preprocessing and analysis: Collect, clean, and preprocess large volumes of data for training and evaluation purposes. Conduct exploratory data analysis to gain insights and identify patterns that can enhance the performance of AI models.
4. Model training and evaluation: Develop robust training pipelines for machine learning models, incorporating best practices in model selection, feature engineering, and hyperparameter tuning. Evaluate model performance using appropriate metrics and iterate on the models to improve accuracy and efficiency.
5. Research and stay up to date: Keep abreast of the latest advancements in machine learning, natural language processing, and generative AI. Stay informed about industry trends, emerging techniques, and open-source libraries, and apply relevant findings to enhance the team's capabilities.
6. Collaborate and communicate effectively: Work closely with a multidisciplinary team of data scientists, software engineers, and domain experts to drive AI initiatives. Clearly communicate complex technical concepts and findings to both technical and non-technical stakeholders.
7. Experimentation and prototyping: Explore novel ideas, experiment with new algorithms, and prototype innovative solutions. Foster a culture of innovation and contribute to the continuous improvement of AI methodologies and practices within the organization.
Requirements:
1. Education: Bachelor's or Master's degree in Computer Science, Data Science, or a related field. Relevant certifications in machine learning, deep learning, or AI are a plus.
2. Experience: A minimum of 3+ years of professional experience as a Machine Learning / AI Engineer, with a proven track record of developing and deploying machine learning models in real-world applications.
3. Strong programming skills: Proficiency in Python and experience with machine learning frameworks (e.g., TensorFlow, PyTorch) and libraries (e.g., scikit-learn, pandas). Experience with cloud platforms (e.g., AWS, Azure, GCP) for model deployment is preferred.
4. Deep-learning expertise: Strong understanding of deep learning architectures (e.g., convolutional neural networks, recurrent neural networks, transformers) and familiarity with Large Language Models (LLMs) such as GPT-3, GPT-4, or equivalent.
5. Natural Language Processing (NLP) knowledge: Familiarity with NLP techniques, including tokenization, word embeddings, named entity recognition, sentiment analysis, text classification, and language generation.
6. Data manipulation and preprocessing skills: Proficiency in data manipulation using SQL and experience with data preprocessing techniques (e.g., cleaning, normalization, feature engineering). Familiarity with big data tools (e.g., Spark) is a plus.
7. Problem-solving and analytical thinking: Strong analytical and problem-solving abilities, with a keen eye for detail. Demonstrated experience in translating complex business requirements into practical machine learning solutions.
8. Communication and collaboration: Excellent verbal and written communication skills, with the ability to explain complex technical concepts to diverse stakeholders
- Creating and managing ETL/ELT pipelines based on requirements
- Build PowerBI dashboards and manage datasets needed.
- Work with stakeholders to identify data structures needed for future and perform any transformations including aggregations.
- Build data cubes for real-time visualisation needs and CXO dashboards.
Required Tech Skills
- Microsoft PowerBI & DAX
- Python, Pandas, PyArrow, Jupyter Notebooks, Apache Spark
- Azure Synapse, Azure Databricks, Azure HDInsight, Azure Data Factory
An 8-year-old IT services and consulting company.
CTC Budget: 35-55LPA
Location: Hyderabad (Remote after 3 months WFO)
Company Overview:
An 8-year-old IT services and consulting company based in Hyderabad, providing services that maximize product value while delivering rapid incremental innovation, with extensive SaaS company M&A experience including 20+ closed transactions on both the buy and sell sides. They have over 100 employees and are looking to grow the team.
- 6 plus years of experience as a Python developer.
- Experience in web development using Python and Django Framework.
- Experience in Data Analysis and Data Science using Pandas, NumPy, and scikit-learn (good to have)
- Experience in developing User Interface using HTML, JavaScript, CSS.
- Experience in server-side templating languages including Jinja 2 and Mako
- Knowledge of Kafka and RabbitMQ (good to have)
- Experience into Docker, Git and AWS
- Ability to integrate multiple data sources into a single system.
- Ability to collaborate on projects and work independently when required.
- DB (MySQL, PostgreSQL, SQL)
Selection Process: 2-3 Interview rounds (Tech, VP, Client)
From building entire infrastructures or platforms to solving complex IT challenges, Cambridge Technology helps businesses accelerate their digital transformation and become AI-first businesses. With over 20 years of expertise as a technology services company, we enable our customers to stay ahead of the curve by helping them figure out the perfect approach, solutions, and ecosystem for their business. Our experts help customers leverage the right AI, big data, cloud solutions, and intelligent platforms that will help them become and stay relevant in a rapidly changing world.
No Of Positions: 1
Skills required:
- The ideal candidate will have a bachelor’s degree in data science, statistics, or a related discipline with 4-6 years of experience, or a master’s degree with 4-6 years of experience. A strong candidate will also possess many of the following characteristics:
- Strong problem-solving skills with an emphasis on achieving proof-of-concept
- Knowledge of statistical techniques and concepts (regression, statistical tests, etc.)
- Knowledge of machine learning and deep learning fundamentals
- Experience with Python implementations to build ML and deep learning algorithms (e.g., pandas, NumPy, scikit-learn, statsmodels, Keras, PyTorch, etc.)
- Experience writing and debugging code in an IDE
- Experience using managed web services (e.g., AWS, GCP, etc.)
- Strong analytical and communication skills
- Curiosity, flexibility, creativity, and a strong tolerance for ambiguity
- Ability to learn new tools from documentation and internet resources.
Roles and responsibilities :
- You will work on a small, core team alongside other engineers and business leaders throughout Cambridge with the following responsibilities:
- Collaborate with client-facing teams to design and build operational AI solutions for client engagements.
- Identify relevant data sources for data wrangling and EDA
- Identify model architectures to use for client business needs.
- Build full-stack data science solutions up to MVP that can be deployed into existing client business processes or scaled up based on clear documentation.
- Present findings to teammates and key stakeholders in a clear and repeatable manner.
Experience :
2 - 14 Yrs
· 4+ years of experience as a Python Developer.
· Good Understanding of Object-Oriented Concepts and Solid principles.
· Good Understanding in Programming and analytical skills.
· Should have hands-on experience with AWS cloud services like S3 and Lambda functions. (Must have)
· Should have experience working with large datasets. (Must have)
· Proficient in using NumPy and Pandas. (Must have)
· Should have hands-on experience with MySQL. (Must have)
· Should have experience in debugging Python applications. (Must have)
· Knowledge of working on Flask.
· Knowledge of object-relational mapping (ORM).
· Able to integrate multiple data sources and databases into one system
· Proficient understanding of code versioning tools such as Git, SVN
· Strong at problem-solving and logical abilities
· Sound knowledge of Front-end technologies like HTML5, CSS3, and JavaScript
· Strong commitment and desire to learn and grow.
We are seeking a skilled and motivated Python Full Stack Developer to join us. The ideal candidate will have experience with Python, JavaScript and its related technologies, as well as a passion for developing efficient and scalable software solutions.
Responsibilities:
- Design and develop high-quality, scalable applications using Python, Django, DRF, FastAPI and JavaScript frameworks such as React or Vue.js
- Analyze business requirements and develop software solutions to meet those needs
- Write clean, maintainable, and efficient code
- Test software solutions to ensure they meet performance, scalability, and reliability requirements
- Debug and troubleshoot issues in the software
- Stay up-to-date with emerging trends and technologies in Python development
Qualifications:
- Bachelor's or Master's degree in Computer Science or related field
- At least 2 years of experience in developing applications using Python, Django, DRF or FastAPI.
- At least 2 years of experience using front-end JavaScript frameworks/libraries such as jQuery, React, or Vue.js
- Experience with database technologies such as PostgreSQL and MongoDB
- Experience with AWS or other cloud platforms
- Ability to write clean and maintainable code
- Strong analytical and problem-solving skills
- Excellent written and verbal communication skills
Nice to have:
- Knowledge of trading in stocks, forex, futures, etc.
- Knowledge of automated trading
- Experience with different trading platforms
We offer:
- Competitive salary
- Flexible working hours
Job Types: Full-time, Regular / Permanent
Salary: ₹400,000.00 - ₹10,000.00 per year
Benefits:
- Flexible schedule
Schedule:
- Day shift
- Monday to Friday
Supplemental pay types:
- Overtime pay
- Yearly bonus
- Performance-based bonus
Ability to commute/relocate:
- Mondeal heights, SG Highway, Ahmedabad - 380015, Gujarat: Reliably commute or planning to relocate before starting work (Required)
Education:
- Bachelor's (Preferred)
Experience:
- Python: 1-3 years (Required)
- JavaScript: 1-3 years (Required)
Job Description:
- 3-4 years of hands-on Python programming and experience with PyData-stack libraries such as Pandas
- Exposure to MongoDB
- Experience in writing Unit Test cases
- Expertise in writing medium/advanced SQL Database queries
- Strong Verbal/Written communication skills
- Ability to work with onsite counterpart teams
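Requirements pairing pandas with "medium/advanced SQL" typically come down to pulling query results straight into a DataFrame. A hedged sketch using the standard library's sqlite3 (the table, columns, and data are invented for illustration):

```python
import sqlite3
import pandas as pd

# In-memory database standing in for whatever RDBMS the project uses.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer TEXT, amount REAL);
    INSERT INTO orders VALUES ('a', 10), ('a', 30), ('b', 5);
""")

# An aggregation-with-filter query read directly into a DataFrame.
df = pd.read_sql(
    "SELECT customer, SUM(amount) AS total FROM orders "
    "GROUP BY customer HAVING total > 8 ORDER BY total DESC",
    conn,
)
```

From here the result can be unit-tested like any other DataFrame, which also covers the "writing unit test cases" requirement.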
Data Scientist
Cubera is a data company revolutionizing big data analytics and adtech through data-share-value principles, wherein users entrust their data to us. We refine the art of understanding, processing, extracting, and evaluating the data entrusted to us, and act as a gateway for brands to increase their lead efficiency as the world moves towards web3.
What you’ll do?
- Build machine learning models, perform proof-of-concept, experiment, optimize, and deploy your models into production; work closely with software engineers to assist in productionizing your ML models.
- Establish scalable, efficient, automated processes for large-scale data analysis, machine-learning model development, model validation, and serving.
- Research new and innovative machine learning approaches.
- Perform hands-on analysis and modeling of enormous data sets to develop insights that increase Ad Traffic and Campaign Efficacy.
- Collaborate with other data scientists, data engineers, product managers, and business stakeholders to build well-crafted, pragmatic data products.
- Actively take on new projects and constantly try to improve the existing models and infrastructure necessary for offline and online experimentation and iteration.
- Work with your team on ambiguous problem areas in existing or new ML initiatives
What are we looking for?
- Ability to write a SQL query to pull the data you need.
- Fluency in Python and familiarity with its scientific stack, such as NumPy, pandas, scikit-learn, and Matplotlib.
- Experience with TensorFlow, PyTorch, and/or R modelling.
- Ability to understand a business problem and translate and structure it into a data science problem.
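"Translating a business problem into a data science problem" often starts as a simple restatement in pandas. As a hedged sketch, the adtech question "which campaign drives clicks?" becomes a groupby (the log and its columns are invented):

```python
import pandas as pd

# Hypothetical impression log; column names are illustrative.
log = pd.DataFrame({
    "campaign": ["a", "a", "b", "b", "b"],
    "clicked":  [1, 0, 1, 1, 0],
})

# Click-through rate per campaign: the business question restated as an aggregation.
ctr = log.groupby("campaign")["clicked"].mean()
```

Once the question is framed this way, the same table feeds directly into model features for the ML work described above.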
Job Category: Data Science
Job Type: Full Time
Job Location: Bangalore
at Altimetrik
Experience : 5-8 years
Location : Bangalore, Chennai, and Hyderabad
Python Developer (1 Position)
Must have skills:
· Experience in advanced Python
· Experience with GUI/test automation tools and libraries (Robot Framework, Selenium, Sikuli, etc.)
· Ability to create UI automation scripts to execute on remote/Citrix servers
· Knowledge of analytical libraries such as Pandas, NumPy, SciPy, and PyTorch
· AWS skillset
Nice to have skills:
· Experience in SQL and Big Data analytic tools like Hive and Hue
· Experience in Machine learning
· Experience in Linux administration
Experienced in writing complex SQL SELECT queries (window functions and CTEs), with advanced SQL experience
Should be able to work as an individual contributor for the initial few months; a team will be aligned based on project movement
Strong in querying logic and data interpretation
Solid communication and articulation skills
Able to handle stakeholders independently with minimal intervention from the reporting manager
Develop strategies to solve problems in logical yet creative ways
Create custom reports and presentations accompanied by strong data visualization and storytelling
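The "window functions and CTEs" requirement can be illustrated with the standard library's sqlite3 (window functions need SQLite 3.25+, which ships with recent Pythons; the table and data are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (rep TEXT, amount INTEGER);
    INSERT INTO sales VALUES ('x', 100), ('x', 300), ('y', 200);
""")

# A CTE feeding a window function: rank reps by total sales.
rows = conn.execute("""
    WITH totals AS (
        SELECT rep, SUM(amount) AS total FROM sales GROUP BY rep
    )
    SELECT rep, total, RANK() OVER (ORDER BY total DESC) AS rnk
    FROM totals
    ORDER BY rnk
""").fetchall()
```

The CTE keeps the aggregation readable, and the window function ranks without collapsing rows, which a plain GROUP BY cannot do.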
Job Description – Data Science
Basic Qualification:
- ME/MS from premier institute with a background in Mechanical/Industrial/Chemical/Materials engineering.
- Strong Analytical skills and application of Statistical techniques to problem solving
- Expertise in algorithms, data structures and performance optimization techniques
- Proven track record of demonstrating end to end ownership involving taking an idea from incubator to market
- Minimum 2+ years of experience in data analysis, statistical analysis, data mining, and optimization algorithms.
Responsibilities
The Data Engineer/Analyst will
- Work with stakeholders throughout the organization to identify opportunities for leveraging company data to drive business solutions.
- Interact clearly with business teams (product planning, sales, marketing, finance) to define projects and objectives.
- Mine and analyze data from company databases to drive optimization and improvement of product and process development, marketing techniques and business strategies
- Coordinate with different R&D and Business teams to implement models and monitor outcomes.
- Mentor team members towards developing quick solutions for business impact.
- Skilled at all stages of the analysis process including defining key business questions, recommending measures, data sources, methodology and study design, dataset creation, analysis execution, interpretation and presentation and publication of results.
- 4+ years' experience in an MNC environment with projects involving ML, DL, and/or DS
- Experience in Machine Learning, Data Mining, or Machine Intelligence (Artificial Intelligence)
- Knowledge of Microsoft Azure is desired.
- Expertise in machine learning techniques such as classification, data/text mining, NLP, image processing, decision trees, random forests, neural networks, and deep learning algorithms
- Proficient in Python and its various libraries such as NumPy, Matplotlib, and Pandas
- Superior verbal and written communication skills, ability to convey rigorous mathematical concepts and considerations to Business Teams.
- Experience in infra development / building platforms is highly desired.
- A drive to learn and master new technologies and techniques.
About Us:
Small businesses are the backbone of the US economy, comprising almost half of the GDP and the private workforce. Yet, big banks don’t provide the access, assistance and modern tools that owners need to successfully grow their business.
We started Novo to challenge the status quo—we’re on a mission to increase the GDP of the modern entrepreneur by creating the go-to banking platform for small businesses (SMBs). Novo is flipping the script of the banking world, and we’re excited to lead the small business banking revolution.
At Novo, we’re here to help entrepreneurs, freelancers, startups and SMBs achieve their financial goals by empowering them with an operating system that makes business banking as easy as iOS. We developed modern bank accounts and tools to help save time and increase cash flow. Our unique product integrations enable easy access to tracking payments, transferring money internationally, managing business transactions and more. We’ve made a big impact in a short amount of time, helping thousands of organizations access powerfully simple business banking.
We are looking for a Senior Data Scientist who is enthusiastic about using data and technology to solve complex business problems. If you're passionate about leading and helping to architect and develop thoughtful data solutions, then we want to chat. Are you ready to revolutionize the small business banking industry with us?
About the Role:
- Build and manage predictive models focused on credit risk, fraud, conversions, churn, consumer behaviour, etc.
- Provide best practices and direction for data analytics and business decision-making across multiple projects and functional areas
- Implement performance optimizations and best practices for scalable data models, pipelines, and modelling
- Resolve blockers and help the team stay productive
- Take part in building the team and iterating on hiring processes
Requirements for the Role:
- 4+ years of experience in data science roles focused on managing data processes, modelling, and dashboarding
- Strong experience in Python and SQL, and an in-depth understanding of modelling techniques
- Experience working with Pandas, scikit-learn, and visualization libraries like Plotly, Bokeh, etc.
- Prior experience with credit risk modelling is preferred
- Deep knowledge of Python to write scripts that manipulate data and generate automated reports
How We Define Success:
- Expand access to data-driven decision making across the organization
- Solve problems in risk, marketing, growth, and customer behaviour through analytics models that increase efficacy
Nice To Have, but Not Required:
- Experience with dashboarding libraries like Python Dash and exposure to CI/CD
- Exposure to big data tools like Spark, and some core tech knowledge around APIs, data streaming, etc.
Novo values diversity as a core tenet of the work we do and the businesses we serve. We are an equal opportunity employer, regardless of race, religion, ethnicity, national origin, citizenship, gender, gender identity, sexual orientation, age, veteran status, disability, genetic information or any other protected characteristic.
What we look for:
We are looking for an associate who will crunch data from various sources and surface the key points from it. This associate will also help us improve existing pipelines and build new ones as requested, visualize the data where required, and find flaws in our existing algorithms.
Responsibilities:
- Work with multiple stakeholders to gather the requirements of data or analysis and take action on them.
- Write new data pipelines and maintain the existing pipelines.
- Gather data from various databases and derive the required metrics from them.
Required Skills:
- Experience with Python and libraries like Pandas and NumPy.
- Experience in SQL and an understanding of NoSQL databases.
- Hands-on experience in Data engineering.
- Must have good analytical skills and knowledge of statistics.
- Understanding of Data Science concepts.
- Bachelor's degree in Computer Science or a related field.
- Problem-solving skills and ability to work under pressure.
Nice to have:
- Experience with MongoDB or any other NoSQL database.
- Experience with Elasticsearch.
- Knowledge of Tableau, Power BI or any other visualization tool.
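"Gathering data from various DBs and finding the required metrics" usually reduces to joining extracts and aggregating in pandas. A hedged sketch with invented tables standing in for two sources:

```python
import pandas as pd

# Two hypothetical extracts (e.g., one from SQL, one from a NoSQL export).
users  = pd.DataFrame({"user_id": [1, 2, 3], "plan": ["free", "pro", "pro"]})
events = pd.DataFrame({"user_id": [1, 2, 2, 3], "event": ["login"] * 4})

# Join the sources, then derive a simple metric: events per plan.
merged = events.merge(users, on="user_id", how="left")
per_plan = merged.groupby("plan")["event"].count()
```

A left join preserves every event even when a user record is missing, which is the safer default when source databases are not perfectly in sync.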
BRIEF DESCRIPTION:
At least 1 year of Python, Spark, SQL, and data engineering experience
Primary Skillset: PySpark, Scala/Python/Spark, Azure Synapse, S3, RedShift/Snowflake
Relevant Experience: Migrating legacy ETL jobs to AWS Glue using a Python and Spark combination
ROLE SCOPE:
Reverse engineer the existing/legacy ETL jobs
Create the workflow diagrams and review the logic diagrams with Tech Leads
Write equivalent logic in Python & Spark
Unit test the Glue jobs and certify the data loads before passing to system testing
Follow best practices and enable appropriate audit and control mechanisms
Analytically skillful: identify root causes quickly and debug issues efficiently
Take ownership of the deliverables and support the deployments
REQUIREMENTS:
Create data pipelines for data integration into cloud stacks (e.g., Azure Synapse)
Code data processing jobs in Azure Synapse Analytics, Python, and Spark
Experience in dealing with structured, semi-structured, and unstructured data in batch and real-time environments.
Should be able to process .json, .parquet and .avro files
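Processing semi-structured files like the ones listed often starts with newline-delimited JSON, which pandas reads directly (Parquet and Avro need extra engines such as pyarrow or fastavro, so JSON lines is shown here; the records are invented):

```python
import io
import pandas as pd

# Semi-structured input: newline-delimited JSON, as it often lands in S3.
raw = io.StringIO('{"id": 1, "v": 10}\n{"id": 2, "v": 20}\n')

# lines=True tells pandas each line is a standalone JSON record.
df = pd.read_json(raw, lines=True)
total = df["v"].sum()
```

In a Glue/Spark migration the equivalent call would be the framework's own JSON reader; the record-per-line shape of the data is the same.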
PREFERRED BACKGROUND:
Tier 1/2 candidates from IITs/NITs/IIITs preferred; however, relevant experience and a learning attitude take precedence
Job Description
Lead Machine Learning (ML)/NLP Engineer
5+ years of experience
About Contify
Contify is an AI-enabled Market and Competitive Intelligence (MCI) software company that helps professionals make informed decisions. Its B2B SaaS platform helps leading organizations such as Ericsson, EY, Wipro, Deloitte, L&T, BCG, and MetLife track information on their competitors, customers, industries, and topics of interest by continuously monitoring over 500,000 sources on a real-time basis. Contify is rapidly growing, with 185+ people across two offices in India, and is the winner of Frost and Sullivan’s Product Innovation Award for Market and Competitive Intelligence Platforms.
The role
We are looking for a hardworking, aspirational, and innovative engineer for the Lead ML/NLP Engineer position. You’ll build Contify’s ML and NLP capabilities and help us extract value from unstructured data. Using advanced NLP, ML, and text analytics, you will develop applications that extract business insights by analyzing large amounts of unstructured text, identifying patterns, and connecting events.
Responsibilities:
You will be responsible for all the processes from data collection and pre-processing to training models and deploying them to production.
➔ Understand the business objectives; design and deploy scalable ML models/NLP applications to meet those objectives
➔ Use NLP techniques for text representation, semantic analysis, and information extraction to meet the business objectives efficiently, along with metrics to measure progress
➔ Extend existing ML libraries and frameworks and use effective text representations to transform natural language into useful features
➔ Define and supervise the data collection process, verifying data quality and employing data augmentation techniques
➔ Define the preprocessing or feature engineering to be done on a given dataset
➔ Analyze the errors of the model and design strategies to overcome them
➔ Research and implement the right algorithms and tools for ML/NLP tasks
➔ Collaborate with engineering and product development teams
➔ Represent Contify in external ML industry events and publish thought-leadership articles
Desired Skills and Experience
To succeed in this role, you should possess outstanding skills in statistical analysis, machine learning methods, and text representation techniques.
➔ Deep understanding of text representation techniques (such as n-grams, bag of words, sentiment analysis, etc.), statistics, and classification algorithms
➔ Hands-on experience in feature extraction techniques for text classification and topic mining
➔ Knowledge of text analytics with a strong understanding of NLP algorithms and models (GLMs, SVM, PCA, NB, clustering, DTs) and their underlying computational and probabilistic statistics:
◆ Text vectorization and word embeddings like TF-IDF, Word2Vec, GloVe, FastText, etc.
◆ Language models like BERT, GPT, RoBERTa, XLNet
◆ Neural networks like RNN, GRU, LSTM, Bi-LSTM
◆ Classification algorithms like LinearSVC, SVM, LR
◆ XGB, MultinomialNB, etc.
◆ Other algorithms: PCA, clustering methods, etc.
➔ Excellent knowledge and demonstrable experience in using NLP packages such as NLTK, Word2Vec, spaCy, Gensim, Stanford CoreNLP, and TensorFlow/PyTorch
➔ Experience in setting up supervised and unsupervised learning models, including data cleaning, data analytics, feature creation, model selection and ensemble methods, performance metrics, and visualization
➔ Evaluation metrics: root mean squared error, confusion matrix, F-score, AUC-ROC, etc.
➔ Understanding of knowledge graphs will be a plus
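Of the text representation techniques listed, TF-IDF is compact enough to sketch from scratch. A toy implementation (real work would use scikit-learn's TfidfVectorizer, whose smoothing and normalization differ from this bare formula):

```python
import math
from collections import Counter

def tfidf(docs):
    """Toy TF-IDF: term frequency weighted by inverse document frequency."""
    n = len(docs)
    tokenized = [d.lower().split() for d in docs]
    # Document frequency: in how many documents each term appears.
    df = Counter(t for doc in tokenized for t in set(doc))
    vectors = []
    for doc in tokenized:
        tf = Counter(doc)
        vectors.append(
            {t: (tf[t] / len(doc)) * math.log(n / df[t]) for t in tf}
        )
    return vectors
```

Terms appearing in every document score zero (log of 1), which is exactly why TF-IDF downweights uninformative words.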
Qualifications
➔ Education: Bachelor's or Master's in Computer Science, Mathematics, Computational Linguistics, or a similar field
➔ At least 4 years' experience building Machine Learning and NLP solutions on open-source platforms such as scikit-learn, TensorFlow, Spark ML, etc.
➔ At least 2 years' experience designing and developing enterprise-scale NLP solutions in one or more of: Named Entity Recognition, Document Classification, Feature Extraction, Triplet Extraction, Clustering, Summarization, Topic Modelling, Dialog Systems, Sentiment Analysis
➔ Self-starter who can see the big picture and prioritize work to make the largest impact on the business's and customers' vision and requirements
➔ Being a committer or contributor to an open-source project is a plus
Note
Contify is a people-oriented company; emotional intelligence, therefore, is a must. You should enjoy working in a team environment, supporting your teammates in pursuit of our common goals, and working with your colleagues to drive customer value. You strive to improve not only yourself but also those around you.
Top Management Consulting Company
We are looking for a technically driven Full-Stack Engineer for one of our premium clients.
COMPANY DESCRIPTION:
Required Skills
• Hands-on experience with NodeJS, React, Redux, & Docker
• Good to have: understanding of Kubernetes, Postgres, and AWS (SQS, Lambda, S3)
• Experience implementing microservice technology
• Experience working with Python and Pandas for data manipulation is a plus
• Experience with Power BI and its APIs is a plus
• Experience with building and maintaining large data sets
• Ability to work across structured, semi-structured, and unstructured data, extracting information and identifying linkages across disparate data sets
• Understanding of information security principles
• Ability to understand complex systems and solve challenging problems
• Ability to clearly communicate complex solutions
• Ability to learn new technologies quickly
• Comfortable in a fast-paced, small-team environment
• Open to work with global team structure, flexible and efficient
• Ability and flexibility to manage multiple assignments in a dynamic, complex, and fast-paced environment
• High level of attention to detail
• Commercial client-facing project experience is a plus
• Business-level language skills and fluency in English
- B.E Computer Science or equivalent.
- In-depth knowledge of machine learning algorithms and their applications, including practical experience with and theoretical understanding of algorithms for classification, regression, and clustering.
- Hands-on experience in computer vision and deep learning projects solving real-world vision tasks such as object detection, object tracking, instance segmentation, activity detection, depth estimation, optical flow, multi-view geometry, domain adaptation, etc.
- Strong understanding of modern and traditional Computer Vision Algorithms.
- Experience with Deep Learning frameworks/networks: PyTorch, TensorFlow, Darknet (YOLO v4/v5), U-Net, Mask R-CNN, EfficientDet, BERT, etc.
- Proficiency with CNN architectures such as ResNet, VGG, UNet, MobileNet, pix2pix, and CycleGAN.
- Experienced user of libraries such as OpenCV, scikit-learn, matplotlib and pandas.
- Ability to transform research articles into working solutions to solve real-world problems.
- High proficiency in Python programming.
- Familiarity with software development practices/pipelines (DevOps: Kubernetes, Docker containers, CI/CD tools).
- Strong communication skills.
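The CNN architectures listed all build on 2-D convolution, which is small enough to sketch with NumPy alone. A hedged illustration: a hand-rolled "valid" convolution applying a Sobel-style kernel to a synthetic image (production code would use OpenCV or a DL framework, not these Python loops):

```python
import numpy as np

def convolve2d(img, kernel):
    """Naive 'valid' 2-D cross-correlation, the core op of a CNN layer."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# Synthetic image: left half dark, right half bright -> one vertical edge.
img = np.hstack([np.zeros((4, 4)), np.ones((4, 4))])
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
edges = convolve2d(img, sobel_x)
```

The response is strongest exactly where the kernel window straddles the dark/bright boundary and zero over the flat regions, which is what "edge detection" means operationally.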
- Fix issues with plugins for our Python-based ETL pipelines
- Help with automation of standard workflows
- Deliver Python microservices for provisioning and managing cloud infrastructure
- Responsible for any refactoring of code
- Effectively manage challenges associated with handling large volumes of data while working to tight deadlines
- Manage expectations with internal stakeholders and context-switch in a fast-paced environment
- Thrive in an environment that uses AWS and Elasticsearch extensively
- Keep abreast of technology and contribute to the engineering strategy
- Champion best development practices and provide mentorship to others
- First and foremost, you are a Python developer, experienced with the Python data stack
- You love and care about data
- Your code is an artistic manifest reflecting how elegant you are in what you do
- You feel sparks of joy when a new abstraction or pattern arises from your code
- You support the DRY (Don't Repeat Yourself) and KISS (Keep It Short and Simple) principles
- You are a continuous learner
- You have a natural willingness to automate tasks
- You have critical thinking and an eye for detail
- Excellent ability and experience working to tight deadlines
- Sharp analytical and problem-solving skills
- Strong sense of ownership and accountability for your work and delivery
- Excellent written and oral communication skills
- Mature collaboration and mentoring abilities
- We are keen to know your digital footprint (community talks, blog posts, certifications, courses you have participated in or are keen to, your personal projects, and any contributions to open-source communities)
- Delivering complex software, ideally in a FinTech setting
- Experience with CI/CD tools such as Jenkins and CircleCI
- Experience with code versioning (Git/Mercurial/Subversion)
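"Plugins for Python-based ETL pipelines" commonly means a registry of named transform steps. A minimal sketch of that pattern (the registry, step names, and record shape are illustrative, not SteelEye's actual design):

```python
# A tiny plugin registry for ETL pipeline steps.
PLUGINS = {}

def register(name):
    """Decorator that registers a transform function under a step name."""
    def wrap(fn):
        PLUGINS[name] = fn
        return fn
    return wrap

@register("strip")
def strip_fields(record):
    """Example plugin: trim whitespace from all string fields."""
    return {k: v.strip() if isinstance(v, str) else v for k, v in record.items()}

def run_pipeline(record, steps):
    """Apply the named plugin steps to a record, in order."""
    for step in steps:
        record = PLUGINS[step](record)
    return record
```

New plugins are added by decorating a function; the pipeline itself never changes, which is what makes "fixing issues with plugins" a localized task.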
A Reputed Analytics Consulting Company in Data Science field
Job Title : Analyst / Sr. Analyst – Data Science Developer - Python
Exp : 2 to 5 yrs
Location : Bangalore / Hyderabad / Chennai
Notice Period: Candidates should be able to join within 2 months (max); immediate joiners preferred.
About the role:
We are looking for an Analyst / Senior Analyst who works in the analytics domain with a strong python background.
Desired Skills, Competencies & Experience:
• 2-4 years of experience working in the analytics domain with a strong Python background.
• Visualization skills in Python with Plotly, Matplotlib, Seaborn, etc., and the ability to create customized plots using such tools.
• Ability to write effective, scalable, and modular code; should be able to understand, test, and debug existing Python project modules quickly and contribute to them.
• Familiarity with Git workflows.
Good to Have:
• Familiarity with cloud platforms like AWS, Azure ML, Databricks, GCP, etc.
• Understanding of shell scripting and Python package development.
• Experience with Python data science packages like Pandas, NumPy, sklearn, etc.
• ML model building and evaluation experience using sklearn.
About us
SteelEye is the only regulatory compliance technology and data analytics firm that offers transaction reporting, record keeping, trade reconstruction, best execution and data insight in one comprehensive solution. The firm’s scalable secure data storage platform offers encryption at rest and in flight and best-in-class analytics to help financial firms meet regulatory obligations and gain competitive advantage.
The company has a highly experienced management team and a strong board, who have decades of technology and management experience and worked in senior positions at many leading international financial businesses. We are a young company that shares a commitment to learning, being smart, working hard and being honest in all we do and striving to do that better each day. We value all our colleagues equally and everyone should feel able to speak up, propose an idea, point out a mistake and feel safe, happy and be themselves at work.
Being part of a start-up can be equally exciting as it is challenging. You will be part of the SteelEye team not just because of your talent but also because of your entrepreneurial flair, which we thrive on at SteelEye. This means we want you to be curious, contribute, ask questions and share ideas. We encourage you to get involved in helping shape our business.
What you will do?
- Deliver plugins for our Python-based ETL pipelines.
- Deliver Python services for provisioning and managing cloud infrastructure.
- Design, Develop, Unit Test, and Support code in production.
- Deal with challenges associated with large volumes of data.
- Manage expectations with internal stakeholders and context switch between multiple deliverables as priorities change.
- Thrive in an environment that uses AWS and Elasticsearch extensively.
- Keep abreast of technology and contribute to the evolution of the product.
- Champion best practices and provide mentorship.
What we're looking for
- Python 3.
- Python libraries used for data (such as pandas, numpy).
- AWS.
- Elasticsearch.
- Performance tuning.
- Object Oriented Design and Modelling.
- Delivering complex software, ideally in a FinTech setting.
- CI/CD tools.
- Knowledge of design patterns.
- Sharp analytical and problem-solving skills.
- Strong sense of ownership.
- Demonstrable desire to learn and grow.
- Excellent written and oral communication skills.
- Mature collaboration and mentoring abilities.
What will you get?
- This is an individual contributor role. So, if you are someone who loves to code and solve complex problems and build amazing products and not worry about anything else, this is the role for you.
- You will have the chance to learn from the best in the business who have worked across the world and are technology geeks.
- Company that always appreciates ownership and initiative. If you are someone who is full of ideas, this role is for you.
Client of People First Consultants
Key skills : Python, NumPy, Pandas, SQL, ETL
Roles and Responsibilities:
- The work will involve the development of workflows triggered by events from other systems
- Design, develop, test, and deliver software solutions in the FX Derivatives group
- Analyse requirements for the solutions they deliver, to ensure that they provide the right solution
- Develop easy-to-use documentation for the frameworks and tools developed, for adoption by other teams
- Familiarity with event-driven programming in Python
- Must have unit testing and debugging skills
- Good problem-solving and analytical skills
- Python packages such as NumPy and scikit-learn
- Testing and debugging applications
- Developing back-end components
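"Workflows triggered by events from other systems" is the event-driven programming this role mentions. A minimal synchronous sketch of the idea (the event name and payload are invented; a real FX system would use a message broker, not an in-process bus):

```python
from collections import defaultdict

class EventBus:
    """Tiny in-process publish/subscribe dispatcher."""
    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, event, handler):
        self._handlers[event].append(handler)

    def publish(self, event, payload):
        # Invoke every handler registered for this event, in order.
        return [h(payload) for h in self._handlers[event]]

bus = EventBus()
# A hypothetical workflow step triggered when another system books a trade.
bus.subscribe("trade_booked", lambda trade: f"confirm {trade['id']}")
```

Handlers are unit-testable in isolation, which is how the "must have unit testing skills" requirement connects to this style.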
Key Skills Required :
- Proficiency in Python 3.x based web and backend development
- Solid understanding of Python concepts
- Strong experience in building web applications using Django
- Experience building REST APIs using DRF or Flask
- Experience with some form of Machine Learning (ML)
- Experience in using libraries such as Numpy and Pandas
- Hands on experience with RDBMS such as Postgres or MySQL including querying
- Comfort with Git repositories, branching and deployment using Git
- Working experience with Docker
- Basic working knowledge of ReactJs
- Experience in deploying Django applications to AWS, DigitalOcean, or Heroku
Responsibilities :
- Understanding requirements and contributing to engineering solutions at a conceptual stage to provide the best possible solution to the task/challenge
- Building high quality code using coding standards based on the SRS/Documentation
- Building component-based, maintainable, scalable, and reusable backend libraries/modules.
- Building & documenting scalable APIs on the Open Spec standard
- Unit testing development modules and APIs
- Conducting code reviews to ensure that the highest quality standards are maintained
- Securing backend applications and APIs using industry best practices
- Troubleshooting issues and fixing bugs raised by the QA team efficiently.
- Optimizing code
- Building and deploying the applications
About the Company
Blue Sky Analytics is a Climate Tech startup that combines the power of AI and satellite data to aid in the creation of a global environmental data stack. Our funders include Beenext and Rainmatter. Over the next 12 months, we aim to expand to 10 environmental datasets spanning water, land, heat, and more!
We are looking for a data scientist to join its growing team. This position will require you to think and act on the geospatial architecture and data needs (specifically geospatial data) of the company. This position is strategic and will also require you to collaborate closely with data engineers, data scientists, software developers and even colleagues from other business functions. Come save the planet with us!
Your Role
Manage: It goes without saying that you will be handling large amounts of image and location datasets. You will develop dataframes and automated pipelines of data from multiple sources. You are expected to know how to visualize them and use machine learning algorithms to be able to make predictions. You will be working across teams to get the job done.
Analyze: You will curate and analyze vast amounts of geospatial datasets like satellite imagery, elevation data, meteorological datasets, openstreetmaps, demographic data, socio-econometric data and topography to extract useful insights about the events happening on our planet.
Develop: You will be required to develop processes and tools to monitor and analyze data and its accuracy. You will develop innovative algorithms which will be useful in tracking global environmental problems like depleting water levels, illegal tree logging, and even tracking of oil-spills.
Demonstrate: A familiarity with working in geospatial libraries such as GDAL/Rasterio for reading/writing of data, and use of QGIS in making visualizations. This will also extend to using advanced statistical techniques and applying concepts like regression, properties of distribution, and conduct other statistical tests.
Produce: With all the hard work being put into data creation and management, it has to be used! You will be able to produce maps showing (but not limited to) spatial distribution of various kinds of data, including emission statistics and pollution hotspots. In addition, you will produce reports that contain maps, visualizations and other resources developed over the course of managing these datasets.
Requirements
These are must have skill-sets that we are looking for:
- Excellent coding skills in Python (including deep familiarity with NumPy, SciPy, pandas).
- Significant experience with git, GitHub, SQL, AWS (S3 and EC2).
- Worked on GIS and is familiar with geospatial libraries such as GDAL and rasterio to read/write the data, a GIS software such as QGIS for visualisation and query, and basic machine learning algorithms to make predictions.
- Demonstrable experience implementing efficient neural network models and deploying them in a production environment.
- Knowledge of advanced statistical techniques and concepts (regression, properties of distributions, statistical tests and proper usage, etc.) and experience with applications.
- Capable of writing clear and lucid reports and demystifying data for the rest of us.
- Be curious and care about the planet!
- Minimum 2 years of demonstrable industry experience working with large and noisy datasets.
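Of the statistical techniques the requirements list, regression is the most concrete to sketch. A hedged example of an ordinary least-squares fit with NumPy (the data is synthetic, constructed to lie exactly on y = 2x + 1):

```python
import numpy as np

# Synthetic observations on the line y = 2x + 1.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2 * x + 1

# Design matrix [x, 1] so lstsq solves for slope and intercept together.
A = np.vstack([x, np.ones_like(x)]).T
slope, intercept = np.linalg.lstsq(A, y, rcond=None)[0]
```

With real, noisy geospatial data the fit would not be exact, and the residuals feed the statistical tests the role also asks about.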
Benefits
- Work from anywhere: Work by the beach or from the mountains.
- Open source at heart: We are building a community that you can use, contribute to, and collaborate with.
- Own a slice of the pie: Possibility of becoming an owner by investing in ESOPs.
- Flexible timings: Fit your work around your lifestyle.
- Comprehensive health cover: Health cover for you and your dependents to keep you tension free.
- Work Machine of choice: Buy a device and own it after completing a year at BSA.
- Quarterly Retreats: Yes, there's work, but then there's all the non-work and fun, aka the retreat!
- Yearly vacations: Take paid leave to rest and get ready for the next big assignment.
Technical Experience :
- 2-6 years of Python working experience
- Expertise in at least one popular Python framework (Django or Flask)
- Knowledge of object-relational mapping (ORM)
- Familiarity with front-end technologies such as JavaScript and HTML5
Key Responsibilities :
- Write effective, scalable code
- Develop back-end components to improve responsiveness and overall performance
- Integrate user-facing elements into applications
- Test and debug programs
- Improve functionality
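Both Django and Flask build on the WSGI contract, so the back-end components above ultimately reduce to something like this stdlib-only sketch (the `/health` route and JSON payload are invented for illustration):

```python
import json

def app(environ, start_response):
    """Minimal WSGI application returning a JSON health check."""
    body = json.dumps({"status": "ok", "path": environ.get("PATH_INFO", "/")}).encode()
    start_response("200 OK", [("Content-Type", "application/json"),
                              ("Content-Length", str(len(body)))])
    return [body]

# Exercise the app without a server by passing a fake environ:
captured = {}
def start_response(status, headers):
    captured["status"] = status

result = b"".join(app({"PATH_INFO": "/health"}, start_response))
print(captured["status"], result.decode())
```

Calling the app callable directly like this is also how one would unit-test such a component without spinning up a server.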
Job Description
JD - Python Developer
Responsibilities
- Design and implement software features based on requirements
- Architect new features for products or tools
- Articulate and document designs as needed
- Prepare and present technical training
- Provide estimates and status for development tasks
- Work effectively in a highly collaborative and iterative development process
- Work effectively with the Product, QA, and DevOps team.
- Troubleshoot issues and correct defects when required
- Build unit and integration tests that assure correct behavior and increase the maintainability of the code base
- Apply dev-ops and automation as needed
- Commit to continuous learning and enhancement of skills and product knowledge
Required Qualifications
- Minimum of 5 years of relevant experience in development and design
- Proficiency in Python and extensive experience with its data science libraries: TensorFlow, NumPy, SciPy, Pandas, etc.
- Strong skills in producing visuals with algorithm results
- Strong SQL and working knowledge of Microsoft SQL Server and other data storage technologies
- Strong web development skills; advanced knowledge of ORM and data access patterns
- Experienced working using Scrum and Agile methodologies
- Excellent debugging and troubleshooting skills
- Deep knowledge of DevOps practices and cloud services
- Strong collaboration and verbal and written communication skills
- Self-starter, detail-oriented, organized, and thorough
- Strong interpersonal skills and a team-oriented mindset
- Fast learner and creative capacity for developing innovative solutions to complex problems
Skills
PYTHON, SQL, TensorFlow, NumPy, SciPy, Pandas
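As a sketch of how the SQL and pandas skills listed above combine, the example below aggregates model metrics straight from a database into a DataFrame. SQLite (stdlib) stands in for SQL Server, and the table and column names are invented:

```python
import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE scores (model TEXT, auc REAL)")
conn.executemany("INSERT INTO scores VALUES (?, ?)",
                 [("xgb", 0.91), ("logit", 0.84), ("xgb", 0.93)])

# Push the aggregation into SQL, then hand the result to pandas:
df = pd.read_sql("SELECT model, AVG(auc) AS mean_auc FROM scores GROUP BY model", conn)
print(df)
```

Doing the GROUP BY in SQL and the presentation in pandas is a common split for the "visuals with algorithm results" part of the role.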
Develop state of the art algorithms in the fields of Computer Vision, Machine Learning and Deep Learning.
Provide software specifications and production code on time to meet project milestones
Qualifications
BE or Master's with 3+ years of experience
Must have prior knowledge and experience in image processing and video processing
Should have knowledge of object detection and recognition
Must have experience in feature extraction, segmentation and classification of images
Face detection, alignment, recognition, tracking & attribute recognition
Excellent understanding and project/job experience in Machine Learning, particularly in areas of Deep Learning: CNN, RNN, TensorFlow, Keras, etc.
Real-world expertise in deep learning applied to Computer Vision problems
Strong foundation in Mathematics
Strong development skills in Python
Must have worked with vision and deep learning libraries and frameworks such as OpenCV, TensorFlow, PyTorch, Keras
Quick learner of new technologies
Ability to work independently as well as part of a team
Working knowledge of version control (Git)
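The segmentation step mentioned above can be sketched with an Otsu-style threshold. A real pipeline would call OpenCV (`cv2.threshold` with `THRESH_OTSU`), so this NumPy version and its synthetic two-cluster "image" are purely illustrative:

```python
import numpy as np

def otsu_threshold(img: np.ndarray) -> int:
    """Pick the threshold that maximizes between-class variance."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# Synthetic 8-bit image: a dark half (value 40) and a bright half (value 200)
img = np.concatenate([np.full(100, 40), np.full(100, 200)]).astype(np.uint8)
t = otsu_threshold(img.reshape(10, 20))
print(t)  # lands between the two intensity clusters
```

Thresholding at `t` then yields the binary mask from which features would be extracted and classified.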
- Use data to develop machine learning models that optimize decision making in Credit Risk, Fraud, Marketing, and Operations
- Implement data pipelines, new features, and algorithms that are critical to our production models
- Create scalable strategies to deploy and execute your models
- Write well designed, testable, efficient code
- Identify valuable data sources and automate collection processes.
- Preprocess structured and unstructured data.
- Analyze large amounts of information to discover trends and patterns.
Requirements:
- 1+ years of experience in applied data science or engineering with a focus on machine learning
- Python expertise with good knowledge of machine learning libraries, tools, techniques, and frameworks (e.g. pandas, sklearn, xgboost, lightgbm, logistic regression, random forest classifier, gradient boosting regressor etc)
- Strong quantitative and programming skills with a product-driven sensibility
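In practice the logistic regression named above would come from a library such as scikit-learn (`sklearn.linear_model.LogisticRegression`); as a hedged illustration of what that fit does, here is a minimal NumPy version trained by gradient descent on synthetic "default flag" data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                    # two synthetic borrower features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)  # synthetic default flag

w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted default probability
    w -= 0.5 * (X.T @ (p - y)) / len(y)     # gradient step on log-loss
    b -= 0.5 * (p - y).mean()

acc = ((1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

The tree-based models also listed (xgboost, lightgbm, random forests) follow the same fit/predict workflow but learn non-linear boundaries.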
Positions : 2-3
CTC Offering : 40,000 to 55,000/month
Job Location: Remote for 6-12 months due to the pandemic, then Mumbai, Maharashtra
Required experience:
Minimum 1.5 to 2 years of experience in web & backend development using Python and Django, with experience in some form of Machine Learning (ML) algorithms
Overview
We are looking for Python developers with a strong understanding of object orientation and experience in web and backend development. Experience with analytical algorithms and mathematical calculations using libraries such as NumPy and Pandas is a must, along with experience in some form of Machine Learning. We require candidates with working experience using the Django framework and DRF.
Key Skills required (Items in Bold are mandatory keywords) :
1. Proficiency in Python 3.x based web and backend development
2. Solid understanding of Python concepts
3. Strong experience in building web applications using Django
4. Experience building REST APIs using DRF or Flask
5. Experience with some form of Machine Learning (ML)
6. Experience in using libraries such as Numpy and Pandas
7. Some form of experience with NLP and Deep Learning using any of Pytorch, Tensorflow, Keras, Scikit-learn or similar
8. Hands on experience with RDBMS such as Postgres or MySQL
9. Comfort with Git repositories, branching and deployment using Git
10. Working experience with Docker
11. Basic working knowledge of ReactJs
12. Experience in deploying Django applications to AWS, Digital Ocean or Heroku
KRAs includes :
1. Understanding the scope of work
2. Understanding and adopting the current internal development workflow and processes
3. Understanding client requirements as communicated by the project manager
4. Arriving on timelines for projects, either independently or as a part of a team
5. Executing projects either independently or as a part of a team
6. Developing products and projects using Python
7. Writing code to collect and mathematically analyse large volumes of data.
8. Creating backend modules in Python by building or reutilizing existing modules in a manner so as to provide optimal deliveries on time
9. Writing scalable, maintainable code
10. Building secured REST APIs
11. Setting up batch task processing environments using Celery
12. Unit testing prepared modules
13. Bug fixing issues as reported by the QA team
14. Optimization and performance tuning of code
Bonus but not mandatory
1. Nodejs
2. Redis
3. PHP
4. CI/CD
5. AWS
- Does analytics to extract insights from raw historical data of the organization.
- Generates usable training dataset for any/all MV projects with the help of Annotators, if needed.
- Analyses user trends, and identifies their biggest bottlenecks in Hammoq Workflow.
- Tests the short/long term impact of productized MV models on those trends.
- Skills (mandatory): NumPy, Pandas, Apache Spark (PySpark), ETL.
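A compact sketch of the ETL steps implied above (de-duplicate, cast, drop unusable rows). At production scale this would run on PySpark DataFrames; pandas is used here so the example stays single-machine and runnable, and the columns are invented:

```python
import pandas as pd

raw = pd.DataFrame({
    "listing_id": [1, 2, 2, 3],
    "price": ["10.0", "12.5", "12.5", None],  # extracted as strings, with a dup and a gap
})

clean = (raw.drop_duplicates("listing_id")                      # de-duplicate
            .assign(price=lambda d: pd.to_numeric(d["price"]))  # cast to numeric
            .dropna(subset=["price"]))                          # drop unusable rows
print(clean)
```

The same chain translates almost line for line to PySpark (`dropDuplicates`, `withColumn` + `cast`, `dropna`).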
Hammoq Inc is a rapidly growing startup in the reselling sector. Our app provides product listings, cross-platform data analytics, and Cross-platform delisting as our core services.
Having launched our web app in 2020 and our iOS app at the start of 2021, we are continuing our exponential growth, and we hope you can play a core role in our mission.
Hammoq is looking for a Senior ML/Machine Vision Architect / Researcher, an expert in Deep Learning, to join our passionate developers' team to create our unique SaaS web app.
The ideal candidate will be responsible for developing new Machine Learning / Machine vision models according to the business needs.
*What you'll do
- You’ll lead the ML R&D process at Hammoq.
- You will build ML architectures to optimise the process.
- You'll collaborate with our hardworking, nimble, and supportive team through daily standups, company presentations, product demos, slack discussions
- You'll work on solving machine vision / Machine Learning problems and implementations.
- You'll use the ML libraries of iOS and Android to build and run models on mobile devices
Skills and expertise that will help you succeed
- Must have experience working with OpenCV, TensorFlow, and Keras environment
- Must have the ability to develop your own models.
- Working experience of training and deploying computer vision models
- Experience in Computer Vision and Machine Learning (including Deep Learning) algorithms.
- Experience in image analytics - including feature extraction, object detection, classification, and tracking
- Experience in image manipulation
- PhD in Computer Vision, Machine Learning, Machine Vision or a related field is a must.
- Strong programming skills in Python, including NumPy, Scikit Learn, Pandas, and Matplotlib
- Self-governing analytical problem-solving skills for efficient and uninterrupted development of solutions
- Strong communication skills to clearly describe technical concepts to others
Nice to have
- Experience in building APIs implementing ML models
- Knowledge or basic understanding of any Cloud ML technologies or Cloud ML service providers.
- Experience in the e-commerce industry
Position description:
- Architecture & Design systems for Predictive analysis and writing algorithms to deal with financial data
- Must have experience on web services and APIs (REST, JSON, and similar) and creation and consumption of RESTful APIs
- Proficiency in writing algorithms with Python/Pandas/NumPy; Jupyter/PyCharm
- Experience with relational and NoSQL databases (Eg. MSSQL, MongoDB, Redshift, PostgreSQL, Redis)
- Implementing Machine Learning Models using Python/R for best performance
- Working with Time Series Data & analyzing large data sets.
- Implementing financial strategies in python and generating reports to analyze the strategy results.
Primary Responsibilities:
- Writing algorithms to deal with financial data and Implementing financial strategies in (Python, SQL) and generating reports to analyze the strategy results.
Educational qualifications preferred: Bachelor's degree
Required Knowledge:
- Highly skilled in SQL, Python, Pandas, NumPy, Machine Learning, Predictive Modelling, algorithm design, and OOP concepts
- 2 - 7 years Full-Time working experience on core SQL, Python role (Non-Support)
- Bachelor’s Degree in Engineering, equivalent or higher education.
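As an illustration of implementing a financial strategy in Python and reporting on its results, here is a hedged pandas sketch of a moving-average crossover backtest on a synthetic price series; the window lengths and data are arbitrary, not a recommendation:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
prices = pd.Series(100 + rng.normal(0, 1, 250).cumsum(),
                   index=pd.date_range("2023-01-01", periods=250, freq="B"))

fast, slow = prices.rolling(5).mean(), prices.rolling(20).mean()
position = (fast > slow).astype(int).shift(1).fillna(0)   # trade on the next bar
strategy_returns = position * prices.pct_change().fillna(0)

print(f"cumulative return: {(1 + strategy_returns).prod() - 1:.2%}")
```

The `shift(1)` matters: it ensures the signal computed on one bar is only traded on the next, avoiding look-ahead bias in the backtest.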
Job Description:
We are looking for an exceptional Data Scientist Lead / Manager who is passionate about data and motivated to build large scale machine learning solutions to shine our data products. This person will be contributing to the analytics of data for insight discovery and development of machine learning pipeline to support modeling of terabytes of daily data for various use cases.
Location: Pune (Initially remote due to COVID 19)
Looking for someone who can start immediately or within a month. Hands-on experience in Python programming (minimum 5 years) is a must.
About the Organisation :
- We provide a dynamic, fun workplace filled with passionate individuals. We are at the cutting edge of advertising technology, and there is never a dull moment at work.
- We have a truly global footprint, with our headquarters in Singapore and offices in Australia, United States, Germany, United Kingdom and India.
- You will gain work experience in a global environment. We speak over 20 different languages, from more than 16 different nationalities and over 42% of our staff are multilingual.
Qualifications:
• 8+ years relevant working experience
• Master / Bachelors in computer science or engineering
• Working knowledge of Python and SQL
• Experience in time series data, data manipulation, analytics, and visualization
• Experience working with large-scale data
• Proficiency of various ML algorithms for supervised and unsupervised learning
• Experience working in Agile/Lean model
• Experience with Java and Golang is a plus
• Experience with BI toolkit such as Tableau, Superset, Quicksight, etc is a plus
• Exposure to building large-scale ML models using one or more of modern tools and libraries such as AWS Sagemaker, Spark ML-Lib, Dask, Tensorflow, PyTorch, Keras, GCP ML Stack
• Exposure to modern Big Data tech such as Cassandra/Scylla, Kafka, Ceph, Hadoop, Spark
• Exposure to IAAS platforms such as AWS, GCP, Azure
Typical persona: Data Science Manager/Architect
Experience: 8+ years programming/engineering experience (with at least last 4 years in Data science in a Product development company)
Type: Hands-on candidate only
Must:
a. Hands-on Python: pandas, scikit-learn
b. Working knowledge of Kafka
c. Able to carry out own tasks and help the team in resolving problems - logical or technical (25% of job)
d. Good on analytical & debugging skills
e. Strong communication skills
Desired (in order of priorities)
a. Go (Strong advantage)
b. Airflow (Strong advantage)
c. Familiarity & working experience on more than one type of database: relational, object, columnar, graph and other unstructured databases
d. Data structures, Algorithms
e. Experience with multi-threaded and thread sync concepts
f. AWS Sagemaker
g. Keras
at golden eagle it technologies pvt ltd
Skills/Requirements:
- Python
- Django
GEITPL is looking for a Python Developer. Please find the JD below:
- 1-6 years of experience
- Ambitious, hardworking, self-motivated and optimistic individual
- Eager to learn diverse open source technologies and work in a dynamic environment requiring constant learning
- Good understanding of OOP concepts; clear understanding of classes, functions and data types in Python
- Knowledge of advanced Python concepts like decorators and memory management
- Knowledge of the Django and Flask frameworks
- Working knowledge of ReactJS
- Good analytical and problem-solving skills, with the ability to work in groups
- Good communication skills
- Fast-paced, self-learning individual with clear thinking and an analytical-logical approach to intellectual work
- Open source contributions would be considered a plus
So, if you have 1+ years of experience developing applications with Python, apply today! We would love to talk! Please submit your complete application with salary expectations.
Quantsapp is India's first Option Trading Analytics platform on mobile. With ever-growing users, this makes us one of the fastest growing platforms for options trading in India. Quantsapp wants to accelerate its growth even more and capture new countries, which requires the development team to grow.
At Quantsapp we are looking for a dynamic teammate to take up a role in server-side development to support the brain behind the application.
Job Summary :
- You will be responsible for developing new logics/products/features as described by the business/research team.
- An ideal candidate should be strong in mathematical processes like optimization, matrix algebra, differential equations, simulation processes, etc., and should also possess decent hands-on experience with Python, SQL Server and preferably AWS. An IIT graduation is a plus.
Responsibilities :
- Create algorithms from scratch
- Create products and backend API's as described by the business team
- Back-test and create hypothesis as desired by the research team
- Code the backend of logics for consumption by the UI team
- Deploy websockets, REST APIs & dynamic TCP/UDP-based data flows
- Deployment and maintenance of codes with version control
Requirements :
- Should possess good knowledge of advanced computing and mathematical processes
- Strong hands-on experience with Python and optionally Matlab
- Knowledge of databases, both SQL & NoSQL
- Ability to work with tight timelines
- In depth knowledge and good hands on experience on Pandas and Numpy
- Knowledge of Option Markets is a plus
- Excellent organizational and multitasking ability
- Experience on AWS Cloud is a plus
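The simulation-process side of the role can be sketched with a Monte Carlo estimate of a European call option under geometric Brownian motion; all parameters below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
S0, K, r, sigma, T = 100.0, 105.0, 0.05, 0.2, 1.0   # assumed market parameters
n_paths = 200_000

z = rng.standard_normal(n_paths)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)  # terminal prices
price = np.exp(-r * T) * np.maximum(ST - K, 0.0).mean()              # discounted mean payoff
print(f"estimated call price: {price:.2f}")
```

With these parameters the estimate converges toward the closed-form Black-Scholes value of roughly 8.0, which makes it a useful sanity check for the simulation machinery.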
The candidate must have expertise in ADF (Azure Data Factory) and be well versed in Python.
Performance optimization of scripts (code) and Productionizing of code (SQL, Pandas, Python or PySpark, etc.)
Required skills:
Bachelor's degree in Computer Science, Data Science, Computer Engineering, IT or equivalent
Fluency in Python (Pandas), PySpark, SQL, or similar
Azure data factory experience (min 12 months)
Able to write efficient code using traditional and OO concepts and modular programming, following the SDLC process.
Experience in production optimization and end-to-end performance tracing (technical root cause analysis)
Ability to work independently with demonstrated experience in project or program management
Azure experience: ability to translate data scientists' Python code and make it efficient (production-ready) for cloud deployment
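One common productionizing step of the kind described above is replacing row-wise prototype code with a vectorized pandas expression; the columns and rate here are invented:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"amount": np.arange(1, 10001), "rate": 0.18})

# Row-by-row version, how data-scientist prototypes often start:
tax_loop = df.apply(lambda row: row["amount"] * row["rate"], axis=1)

# Vectorized version, typically orders of magnitude faster:
tax_vec = df["amount"] * df["rate"]

assert np.allclose(tax_loop, tax_vec)   # same numbers, much cheaper
```

The end-to-end performance tracing mentioned above is largely about finding such `apply`/loop hotspots and proving the vectorized replacement is numerically equivalent.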
- Writing reusable, testable, and efficient code
- Design and implementation of low-latency, high-availability, and performant applications
- Analyse, automate and backtest the existing strategies
- Employing statistical applications to visualize and/or transform the distribution or dispersion profile of individual signals and support the mathematical formulation of multiple signal algorithms to rank securities for investment.
- Candidates should have strong coding skills in Python (NumPy, Pandas, Matplotlib, Zipline), Django, JavaScript, etc.
- Basic understanding of front-end technologies, such as HTML5, and CSS3
- Must have database knowledge of open-source options such as PostgreSQL and MySQL
Learnability, teamwork and flexibility are important traits we look for.
- 3+ years experience in practical implementation and deployment of ML based systems preferred.
- BE/B Tech or M Tech (preferred) in CS/Engineering with strong mathematical/statistical background
- Strong mathematical and analytical skills, especially statistical and ML techniques, with familiarity with different supervised and unsupervised learning algorithms
- Implementation experiences and deep knowledge of Classification, Time Series Analysis, Pattern Recognition, Reinforcement Learning, Deep Learning, Dynamic Programming and Optimisation
- Experience in working on modeling graph structures related to spatiotemporal systems
- Programming skills in Python
- Experience in developing and deploying on cloud (AWS or Google or Azure)
- Good verbal and written communication skills
- Familiarity with well-known ML frameworks such as Pandas, Keras, TensorFlow
What you’ll do
- Deliver plugins for our Python-based ETL pipelines.
- Deliver Python microservices for provisioning and managing cloud infrastructure.
- Implement algorithms to analyse large data sets.
- Draft design documents that translate requirements into code.
- Deal with challenges associated with handling large volumes of data.
- Assume responsibilities from technical design through technical client support.
- Manage expectations with internal stakeholders and context-switch in a fast paced environment.
- Thrive in an environment that uses AWS and Elasticsearch extensively.
- Keep abreast of technology and contribute to the engineering strategy.
- Champion best development practices and provide mentorship.
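The plugin-based ETL described above might look something like this registry sketch; the decorator design and step names are assumptions for illustration, not the actual pipeline API:

```python
from typing import Callable, Dict, List

PLUGINS: Dict[str, Callable[[List[dict]], List[dict]]] = {}

def plugin(name: str):
    """Register a transform step under a name."""
    def register(fn):
        PLUGINS[name] = fn
        return fn
    return register

@plugin("drop_nulls")
def drop_nulls(records):
    # Discard records with any missing field
    return [r for r in records if all(v is not None for v in r.values())]

@plugin("uppercase_symbol")
def uppercase_symbol(records):
    # Normalize the (hypothetical) "symbol" field
    return [{**r, "symbol": r["symbol"].upper()} for r in records]

def run_pipeline(records, steps):
    for step in steps:
        records = PLUGINS[step](records)
    return records

out = run_pipeline([{"symbol": "aapl", "qty": 10}, {"symbol": "msft", "qty": None}],
                   ["drop_nulls", "uppercase_symbol"])
print(out)  # [{'symbol': 'AAPL', 'qty': 10}]
```

A registry like this lets new plugins ship without touching the pipeline core, which is what makes the "deliver plugins" responsibility tractable.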
What we’re looking for
- Experience in Python 3.
- Python libraries used for data (such as pandas, numpy).
- AWS.
- Elasticsearch.
- Performance tuning.
- Object Oriented Design and Modelling.
- Delivering complex software, ideally in a FinTech setting.
- CI/CD tools.
- Knowledge of design patterns.
- Sharp analytical and problem-solving skills.
- Strong sense of ownership.
- Demonstrable desire to learn and grow.
- Excellent written and oral communication skills.
- Mature collaboration and mentoring abilities.
About SteelEye Culture
- Work from home until you are vaccinated against COVID-19
- Top-of-the-line health insurance
- Order discounted meals every day from a dedicated portal
- Fair and simple salary structure
- 30+ holidays in a year
- Fresh fruits every day
- Centrally located. 5 mins to the nearest metro station (MG Road)
- Measured on output and not input
We are actively seeking software development engineers who are interested in designing robust trading systems and refining programs to efficiently manage various types of financial market data that facilitate our quantitative investment research. By designing and improving the firm's internal applications, the SDE will play a key role in expanding the firm's trading capabilities.
Responsibilities:
- Management and scaling up of existing infrastructure for high-frequency market data capture.
- Develop a scalable and consistent data handling infrastructure for the above data to facilitate efficient backtesting of quantitative investment strategies.
- Perform R&D to build a software platform in Python for backtesting various kinds of investment strategies using the above databases.
- This will involve studying the strategy development process and performance evaluation metrics.
- Develop autopilot risk-management systems to monitor live performance of the Portfolio.
- Improve the existing algorithms to achieve better execution price and reduce the latency.
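An autopilot risk-management check like the one described might monitor drawdown against a limit; the synthetic PnL series and threshold below are assumptions, not live logic:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
pnl = pd.Series(rng.normal(0.0, 1.0, 500)).cumsum()   # synthetic cumulative PnL

peak = pnl.cummax()
drawdown = pnl - peak        # distance below the running high (always <= 0)
LIMIT = -15.0                # assumed kill-switch threshold
breaches = drawdown < LIMIT

print(f"max drawdown: {drawdown.min():.1f}, limit breaches: {int(breaches.sum())}")
```

In a live system the same check would run on streaming portfolio marks and trigger de-risking when a breach is flagged.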
Requirements:
Our ideal candidate would have graduated with a degree in computer science from a top university with 1-3 years industry experience, along with:
- High Level of proficiency in Python and good knowledge of Matlab/C++/C#.
- Past experience in dealing with large datasets and Knowledge of database administration and network programming will be a plus.
- Well-versed in software engineering principles, frameworks and technologies.
- The ability to manage multiple tasks in a fast-paced environment.
- Excellent analytical and problem solving abilities.
- A keen interest in learning about the financial markets.
Experience building APIs and working with AWS and S3 is an added advantage.