11+ Statistical signal processing Jobs in India
- along with metrics to track their progress
- Managing available resources such as hardware, data, and personnel so that deadlines are met
- Analysing the ML algorithms that could be used to solve a given problem and ranking them by their success probability
- Exploring and visualizing data to gain an understanding of it, then identifying differences in data distribution that could affect performance when deploying the model in the real world
- Verifying data quality, and/or ensuring it via data cleaning
- Supervising the data acquisition process if more data is needed
- Defining validation strategies
- Defining the pre-processing or feature engineering to be done on a given dataset
- Defining data augmentation pipelines
- Training models and tuning their hyperparameters
- Analysing the errors of the model and designing strategies to overcome them
- Deploying models to production
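Two of the steps above, defining a validation strategy and tuning hyperparameters, can be sketched with scikit-learn. The dataset and parameter grid below are illustrative assumptions, not project specifics:

```python
# Illustrative sketch: define a validation strategy and tune hyperparameters.
# The synthetic dataset and the small parameter grid are placeholder assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Validation strategy: stratified 5-fold cross-validation.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

# Hyperparameter tuning over a deliberately tiny grid.
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100], "max_depth": [3, None]},
    cv=cv,
    scoring="accuracy",
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```

In practice the same `GridSearchCV` object then serves as the tuned model for error analysis and deployment, since it refits on the full training set with the best parameters.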
You will:
- Create highly scalable AWS microservices utilizing cutting-edge cloud technologies.
- Design and develop Big Data pipelines that handle huge volumes of geospatial data.
- Bring clarity to large complex technical challenges.
- Collaborate with Engineering leadership to help drive technical strategy.
- Scope, plan, and estimate projects.
- Mentor and coach team members at different levels of experience.
- Participate in peer code reviews and technical meetings.
- Cultivate a culture of engineering excellence.
- Seek, implement and adhere to standards, frameworks and best practices in the industry.
- Participate in on-call rotation.
You have:
- Bachelor’s/Master’s degree in computer science, computer engineering or relevant field.
- 5+ years of experience in software design, architecture and development.
- 5+ years of experience using object-oriented languages (Java, Python).
- Strong experience with Big Data technologies like Hadoop, Spark, MapReduce, Kafka, etc.
- Strong experience in working with different AWS technologies.
- Excellent competencies in data structures & algorithms.
Nice to have:
- Proven track record of delivering large scale projects, and an ability to break down large tasks into smaller deliverable chunks
- Experience in developing high throughput low latency backend services
- Affinity for spatial data structures and algorithms.
- Familiarity with Postgres DB, Google Places or Mapbox APIs
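Several of the Big Data technologies listed above (Hadoop, MapReduce, Spark) build on the same map/shuffle/reduce pattern, which can be sketched in plain Python. This is illustrative only, not the frameworks' actual APIs:

```python
# Minimal sketch of the MapReduce pattern in plain Python; a word count
# is the classic example. Illustrative only, not the Hadoop/Spark APIs.
from collections import defaultdict

def map_phase(records):
    # Emit (key, 1) pairs, as a mapper does for a word count.
    for line in records:
        for word in line.split():
            yield word.lower(), 1

def shuffle(pairs):
    # Group values by key, as the framework's shuffle stage does.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # Sum the grouped values for each key.
    return {key: sum(values) for key, values in grouped.items()}

counts = reduce_phase(shuffle(map_phase(["big data", "big pipelines"])))
print(counts)  # {'big': 2, 'data': 1, 'pipelines': 1}
```

The real frameworks distribute the map and reduce phases across machines and handle the shuffle over the network, but the data flow is the same.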
What we offer
At GroundTruth, we want our employees to be comfortable with their benefits so they can focus on doing the work they love.
- Unlimited Paid Time Off
- In Office Daily Catered Lunch
- Fully stocked snacks/beverages
- 401(k) employer match
- Health coverage including medical, dental, vision and option for HSA or FSA
- Generous parental leave
- Company-wide DEIB Committee
- Inclusion Academy Seminars
- Wellness/Gym Reimbursement
- Pet Expense Reimbursement
- Company-wide Volunteer Day
- Education reimbursement program
- Cell phone reimbursement
- Equity Analysis to ensure fair pay
at Persistent Systems
Location: Pune/Nagpur, Goa, Hyderabad
Job Requirements:
- 9+ years of total experience, preferably in the big data space.
- Experience creating Spark applications in Scala to process data.
- Experience in scheduling and troubleshooting/debugging Spark jobs in steps.
- Experience in Spark job performance tuning and optimization.
- Should have experience in processing data using Kafka/Python.
- Should have experience configuring Kafka topics to optimize performance.
- Should be proficient in writing SQL queries to process data in Data Warehouse.
- Hands on experience in working with Linux commands to troubleshoot/debug issues and creating shell scripts to automate tasks.
- Experience on AWS services like EMR.
- The ideal candidate is adept at using large data sets to find opportunities for product and process optimization and using models to test the effectiveness of different courses of action.
- Mine and analyze data from company databases to drive optimization and improvement of product development, marketing techniques and business strategies.
- Assess the effectiveness and accuracy of new data sources and data gathering techniques.
- Develop custom data models and algorithms to apply to data sets.
- Use predictive modeling to increase and optimize customer experiences, revenue generation, ad targeting and other business outcomes.
- Develop company A/B testing framework and test model quality.
- Develop processes and tools to monitor and analyze model performance and data accuracy.
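The A/B testing framework mentioned above typically reduces to comparing conversion rates between two variants. A two-proportion z-test is the usual core; the conversion counts below are made-up example numbers:

```python
# Illustrative two-proportion z-test, the statistical core of a simple
# A/B testing framework. The conversion counts are made-up examples.
from math import erf, sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Variant B converts 156/2400 vs. A's 120/2400.
z, p = two_proportion_z(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
print(round(z, 2), round(p, 4))
```

A production framework adds randomized assignment, sample-size planning, and guardrails against peeking, but the decision rule ultimately rests on a test like this one.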
Roles & Responsibilities
- Experience using statistical languages (R, Python, SQL, etc.) to manipulate data and draw insights from large data sets.
- Experience working with and creating data architectures.
- Looking for someone with 3-7 years of experience manipulating data sets and building statistical models
- Bachelor's or Master's degree in Computer Science or another quantitative field
- Knowledge and experience in statistical and data mining techniques :
- GLM/Regression, Random Forest, Boosting, Trees, text mining, social network analysis, etc.
- Experience querying databases and using statistical computer languages: R, Python, SQL, etc.
- Experience creating and using advanced machine learning algorithms and statistics: regression, simulation, scenario analysis, modeling, clustering, decision trees, neural networks, etc.
- Experience with distributed data/computing tools: Map/Reduce, Hadoop, Hive, Spark, Gurobi, MySQL, etc.
- Experience visualizing/presenting data for stakeholders using: Periscope, Business Objects, D3, ggplot, etc.
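One of the techniques listed above, clustering, can be sketched in a few lines with scikit-learn. The toy 2-D data below is a made-up example:

```python
# Illustrative k-means clustering sketch; the two artificial blobs
# around (0, 0) and (5, 5) are synthetic placeholder data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
data = np.vstack([
    rng.normal(0, 0.5, (50, 2)),  # cluster near the origin
    rng.normal(5, 0.5, (50, 2)),  # cluster near (5, 5)
])

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print(sorted(model.cluster_centers_.round(1).tolist()))
```

With well-separated blobs like these, the fitted centers land close to the true means; on real data, choosing `n_clusters` usually takes an elbow or silhouette analysis.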
at Fragma Data Systems
Experience Range |
2 Years - 10 Years |
Function | Information Technology |
Desired Skills |
Must Have Skills:
• Good experience in PySpark, including DataFrame core functions and Spark SQL
• Good experience in SQL databases; able to write queries of fair complexity
• Excellent experience in Big Data programming for data transformation and aggregation
• Good at ELT architecture: business-rules processing and data extraction from a Data Lake into data streams for business consumption
• Good customer communication
• Good analytical skills
|
Education Type | Engineering |
Degree / Diploma | Bachelor of Engineering, Bachelor of Computer Applications, Any Engineering |
Specialization / Subject | Any Specialisation |
Job Type | Full Time |
Job ID | 000018 |
Department | Software Development |
- Experienced in writing complex SQL SELECT queries (window functions & CTEs), with advanced SQL experience
- Should work as an individual contributor for the initial few months; a team will be aligned based on project movement
- Strong in querying logic and data interpretation
- Solid communication and articulation skills
- Able to handle stakeholders independently, with minimal intervention from the reporting manager
- Develop strategies to solve problems in logical yet creative ways
- Create custom reports and presentations accompanied by strong data visualization and storytelling
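The window-function and CTE skills called for above can be illustrated with a small query run against an in-memory SQLite database (window functions need SQLite 3.25+). The table and values are made-up examples:

```python
# Illustrative CTE + window-function query against in-memory SQLite.
# The sales table and its rows are placeholder example data.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("north", 100), ("north", 300), ("south", 200), ("south", 50)],
)

query = """
WITH regional AS (                      -- CTE: name the raw rows
    SELECT region, amount FROM sales
)
SELECT region,
       amount,
       RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rnk
FROM regional
ORDER BY region, rnk
"""
rows = conn.execute(query).fetchall()
print(rows)
```

The same pattern (CTE for staging, `RANK() OVER (PARTITION BY ...)` for per-group ordering) carries over directly to warehouse engines like Redshift or Snowflake.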
Hammoq Inc is a rapidly growing startup in the reselling sector. Our app provides product listings, cross-platform data analytics, and cross-platform delisting as our core services.
Having launched our web app in 2020 and our iOS app at the start of 2021, we are continuing our exponential growth, and we hope you can play a core role in our mission.
Hammoq is looking for a Senior ML/Machine Vision Architect / Researcher, an expert in Deep Learning, to join our passionate developers' team to create our unique SaaS web app.
The ideal candidate will be responsible for developing new Machine Learning / Machine vision models according to the business needs.
What you'll do
- You’ll lead the ML R&D process at Hammoq.
- You will build ML architectures to optimise the process.
- You'll collaborate with our hardworking, nimble, and supportive team through daily standups, company presentations, product demos, slack discussions
- You'll work on solving machine vision / Machine Learning problems and implementations.
- You'll use the ML libraries of iOS and Android to build and run models on mobile devices
Skills and expertise that will help you succeed
- Must have experience working with OpenCV, TensorFlow, and Keras environment
- Must have the ability to develop your own models.
- Working experience of training and deploying computer vision models
- Experience in Computer Vision and Machine Learning (including Deep Learning) algorithms.
- Experience in image analytics - including feature extraction, object detection, classification, and tracking
- Experience in image manipulation
- PhD in Computer Vision, Machine Learning, Machine Vision, or a related field is a must.
- Strong programming skills in Python, including NumPy, scikit-learn, Pandas, and Matplotlib
- Self-governing analytical problem-solving skills for efficient and uninterrupted development of solutions
- Strong communications skills for an adequate description of technical concepts to others
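The image-manipulation and feature-extraction experience asked for above can be sketched with NumPy alone: grayscale conversion followed by a crude gradient-based edge feature. The 4x4 "image" is synthetic:

```python
# Illustrative image-manipulation sketch using only NumPy: grayscale
# conversion and a simple edge-strength feature. The 4x4 RGB "image"
# (left half dark, right half bright) is synthetic placeholder data.
import numpy as np

img = np.zeros((4, 4, 3), dtype=np.float64)
img[:, 2:, :] = 1.0  # brighten the right two columns

# Grayscale via the standard luminance weights.
gray = img @ np.array([0.299, 0.587, 0.114])

# Horizontal gradient as a crude edge feature; a Sobel kernel in
# OpenCV plays the same role on real images.
edges = np.abs(np.diff(gray, axis=1))
print(edges.max())  # strongest response at the dark/bright boundary
```

Real pipelines would use `cv2.cvtColor` and `cv2.Sobel` (or learned features from a CNN), but the underlying array operations are the same.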
Nice to have
- Experience in building APIs implementing ML models
- Knowledge or basic understanding of any Cloud ML technologies or Cloud ML service providers.
- Experience in the e-commerce industry
In 2018-19, the mobile games market in India generated over $600 million in revenues. With close to 450 people in its Mumbai and Bangalore offices, Games24x7 is India’s largest mobile games business today and is very well positioned to become the 800-pound gorilla of what will be a $2 billion market by 2022. While Games24x7 continues to invest aggressively in its India centric mobile games, it is also diversifying its business by investing in international gaming and other tech opportunities.
Summary of Role
Position/Role Description :
The candidate will be part of a team managing databases (MySQL, MongoDB, Cassandra) and will be involved in designing, configuring and maintaining databases.
Job Responsibilities:
• Complete involvement in the database requirement starting from the design phase for every project.
• Deploying required database assets on production (DDL, DML)
• Good understanding of MySQL Replication (Master-slave, Master-Master, GTID-based)
• Understanding of MySQL partitioning.
• Good understanding of MySQL logs and configuration.
• Experience scheduling backups and restorations.
• Good understanding of MySQL versions and their features.
• Good understanding of InnoDB-Engine.
• Exploring ways to optimize the current environment and also lay a good platform for new projects.
• Able to understand and resolve any database related production outages.
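Backup scheduling, one of the responsibilities above, often comes down to a scripted `mysqldump` run from cron. The helper below only builds the command string; the host, database name, and output directory are placeholder assumptions:

```python
# Illustrative helper that builds a mysqldump backup command plus a cron
# line to schedule it. Host, database, and paths are placeholder
# assumptions; nothing is executed here.
from datetime import date

def backup_command(db, host="localhost", out_dir="/var/backups/mysql"):
    stamp = date.today().strftime("%Y%m%d")
    # --single-transaction gives a consistent snapshot of InnoDB tables
    # without locking them for the duration of the dump.
    dump = f"mysqldump -h {host} --single-transaction {db}"
    return f"{dump} | gzip > {out_dir}/{db}-{stamp}.sql.gz"

# A cron entry running the backup daily at 02:30.
cron_line = f"30 2 * * * {backup_command('games_db')}"
print(cron_line)
```

Tools such as Percona XtraBackup replace `mysqldump` for large datasets, but the scheduling pattern is identical.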
Job Requirements:
• BE/B.Tech from a reputed institute
• Experience in python scripting.
• Experience in shell scripting.
• General understanding of system hardware.
• Experience in MySQL is a must.
• Experience in MongoDB, Cassandra, Graph db will be preferred.
• Experience with Percona MySQL tools.
• 6 - 8 years of experience.
Job Location: Bengaluru
What you will be doing:
As part of the Global Credit Risk and Data Analytics team, this person will be responsible for carrying out the following analytical initiatives:
- Dive into the data and identify patterns
- Development of end-to-end Credit models and credit policy for our existing credit products
- Leverage alternate data to develop best-in-class underwriting models
- Working on Big Data to develop risk analytical solutions
- Development of Fraud models and fraud rule engine
- Collaborate with various stakeholders (e.g. tech, product) to understand and design best solutions which can be implemented
- Working on cutting-edge techniques e.g. machine learning and deep learning models
Example of projects done in past:
- Lazypay Credit Risk model using the CatBoost modelling technique; end-to-end pipeline for feature engineering and model deployment in production using Python
- Fraud model development, deployment and rules for EMEA region
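An end-to-end credit-risk model of the kind described above can be sketched as follows. The post mentions CatBoost; scikit-learn's `GradientBoostingClassifier` stands in here so the example is self-contained, and the borrower features are synthetic:

```python
# Illustrative end-to-end credit-risk classifier sketch. Scikit-learn's
# gradient boosting stands in for CatBoost; the imbalanced synthetic
# dataset is a placeholder for real borrower features and default labels.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# ~10% positive class, mimicking the rarity of defaults.
X, y = make_classification(n_samples=1000, n_features=8, weights=[0.9],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(round(auc, 3))  # AUC is the usual headline metric for credit models
```

Swapping in `catboost.CatBoostClassifier` keeps the same fit/predict_proba interface while adding native categorical-feature handling.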
Basic Requirements:
- 1-3 years of work experience as a Data scientist (in Credit domain)
- 2016 or 2017 batch from a premium college (e.g B.Tech. from IITs, NITs, Economics from DSE/ISI etc)
- Strong problem-solving skills; able to understand and execute complex analyses
- Experience in at least one of the languages (R/Python/SAS) and SQL
- Experience in the credit industry (fintech/bank)
- Familiarity with the best practices of Data Science
Add-on Skills :
- Experience in working with big data
- Solid coding practices
- Passion for building new tools/algorithms
- Experience in developing Machine Learning models