11+ Video compression Jobs in Bangalore (Bengaluru) | Video compression Job openings in Bangalore (Bengaluru)
If you are:
1. An expert in deep learning and machine learning techniques,
2. Extremely good at image/video processing, and
3. Well-versed in linear algebra, optimization techniques, statistics and pattern recognition,
then you are the right fit for this position.
Job Responsibilities:
1. Develop and debug applications using Python.
2. Improve code quality and code coverage for existing and new programs.
3. Deploy and integrate machine learning models.
4. Test and validate the deployments.
5. Perform MLOps functions.
Technical Skills
1. Graduate in Engineering or Technology with strong academic credentials
2. 4 to 8 years of experience as a Python developer.
3. Excellent understanding of SDLC processes
4. Strong knowledge of Unit testing, code quality improvement
5. Cloud-based deployment and integration of applications/microservices.
6. Experience with NoSQL databases, such as MongoDB, Cassandra
7. Strong applied statistics skills
8. Knowledge of creating CI/CD pipelines and touchless deployment.
9. Knowledge of APIs and data engineering techniques.
10. Experience with AWS.
11. Knowledge of machine learning and large language models (LLMs).
Nice to Have
1. Exposure to financial research domain
2. Experience with JIRA, Confluence
3. Understanding of scrum and Agile methodologies
4. Experience with data visualization tools, such as Grafana, ggplot2, etc.
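Since the role above stresses unit testing and code-coverage improvement, here is a minimal sketch of what that looks like with Python's built-in unittest framework (the `normalize_scores` helper is a hypothetical example, not part of the posting):

```python
import unittest

def normalize_scores(scores):
    """Scale a list of numeric scores into the [0, 1] range."""
    if not scores:
        return []
    lo, hi = min(scores), max(scores)
    if lo == hi:  # constant input: avoid division by zero
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

class TestNormalizeScores(unittest.TestCase):
    def test_empty_input(self):
        self.assertEqual(normalize_scores([]), [])

    def test_constant_input(self):
        self.assertEqual(normalize_scores([5, 5]), [0.0, 0.0])

    def test_full_range(self):
        self.assertEqual(normalize_scores([0, 5, 10]), [0.0, 0.5, 1.0])

# Run the suite programmatically (a CI pipeline would use `python -m unittest`).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestNormalizeScores)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Coverage for such a suite is typically measured with coverage.py, e.g. `coverage run -m unittest` followed by `coverage report`.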
Skills and requirements
- Experience analyzing complex and varied data in a commercial or academic setting.
- Desire to solve new and complex problems every day.
- Excellent ability to communicate scientific results to both technical and non-technical team members.
Desirable
- A degree in a numerically focused discipline such as Maths, Physics, Chemistry, Engineering or Biological Sciences.
- Hands-on experience with Python, PySpark and SQL.
- Hands-on experience building end-to-end data pipelines.
- Hands-on experience with Azure Data Factory, Azure Databricks and Data Lake is an added advantage.
- Experience with big data tools: Hadoop, Hive, Sqoop, Spark, Spark SQL.
- Experience with SQL or NoSQL databases for the purposes of data retrieval and management.
- Experience in data warehousing and business intelligence tools, techniques and technology, as well as experience in diving deep on data analysis or technical issues to come up with effective solutions.
- BS degree in math, statistics, computer science or equivalent technical field.
- Experience in data mining structured and unstructured data (SQL, ETL, data warehouse, Machine Learning etc.) in a business environment with large-scale, complex data sets.
- Proven ability to look at solutions in unconventional ways. Sees opportunities to innovate and can lead the way.
- Willingness to learn and work on data science, ML and AI.
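Several points above call for building end-to-end data pipelines. As a rough illustration (hypothetical data and table names, using only the Python standard library rather than PySpark or Azure tooling), an extract-transform-load flow can be as small as:

```python
import csv
import io
import sqlite3

# Hypothetical raw feed; in practice this would come from a file or an API.
RAW_CSV = """user_id,amount,currency
1,100.0,USD
2,250.5,USD
3,,USD
"""

def extract(raw):
    """Extract: parse CSV rows into dictionaries."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows):
    """Transform: drop rows with missing amounts and cast types."""
    clean = []
    for row in rows:
        if row["amount"]:
            clean.append((int(row["user_id"]), float(row["amount"])))
    return clean

def load(rows, conn):
    """Load: write the cleaned rows into a warehouse table."""
    conn.execute("CREATE TABLE IF NOT EXISTS payments (user_id INTEGER, amount REAL)")
    conn.executemany("INSERT INTO payments VALUES (?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
total = conn.execute("SELECT SUM(amount) FROM payments").fetchone()[0]
print(total)  # 350.5
```

A production pipeline adds scheduling, retries and data-quality checks on top of this same extract/transform/load skeleton.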
About Us:
Small businesses are the backbone of the US economy, comprising almost half of the GDP and the private workforce. Yet, big banks don’t provide the access, assistance and modern tools that owners need to successfully grow their business.
We started Novo to challenge the status quo—we’re on a mission to increase the GDP of the modern entrepreneur by creating the go-to banking platform for small businesses (SMBs). Novo is flipping the script of the banking world, and we’re excited to lead the small business banking revolution.
At Novo, we’re here to help entrepreneurs, freelancers, startups and SMBs achieve their financial goals by empowering them with an operating system that makes business banking as easy as iOS. We developed modern bank accounts and tools to help save time and increase cash flow. Our unique product integrations enable easy access to tracking payments, transferring money internationally, managing business transactions and more. We’ve made a big impact in a short amount of time, helping thousands of organizations access powerfully simple business banking.
We are looking for a Senior Data Scientist who is enthusiastic about using data and technology to solve complex business problems. If you're passionate about leading and helping to architect and develop thoughtful data solutions, then we want to chat. Are you ready to revolutionize the small business banking industry with us?
About the Role:
- Build and manage predictive models focussed on credit risk, fraud, conversions, churn, consumer behaviour, etc.
- Provide best practices and direction for data analytics and business decision making across multiple projects and functional areas.
- Implement performance optimizations and best practices for scalable data models, pipelines and modelling.
- Resolve blockers and help the team stay productive
- Take part in building the team and iterating on hiring processes
Requirements for the Role:
- 4+ years of experience in data science roles focussed on managing data processes, modelling and dashboarding
- Strong experience in Python and SQL, and an in-depth understanding of modelling techniques
- Experience working with pandas, scikit-learn, and visualization libraries like Plotly, Bokeh, etc.
- Prior experience with credit risk modelling will be preferred
- Deep Knowledge of Python to write scripts to manipulate data and generate automated reports
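The credit-risk modelling mentioned above usually ends with scoring new applicants against a fitted model. A minimal sketch of that scoring step, with a hand-written logistic function and entirely hypothetical feature names and coefficients (illustrative values, not from any real model):

```python
import math

# Hypothetical coefficients, as if taken from a previously fitted
# logistic regression; real models are trained on historical outcomes.
INTERCEPT = -2.0
COEFFS = {"utilization": 3.0, "late_payments": 0.8, "years_on_book": -0.1}

def default_probability(features):
    """Score a borrower: logistic function over a linear combination of features."""
    z = INTERCEPT + sum(COEFFS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

low_risk = default_probability({"utilization": 0.1, "late_payments": 0, "years_on_book": 5})
high_risk = default_probability({"utilization": 0.9, "late_payments": 4, "years_on_book": 1})
print(round(low_risk, 3), round(high_risk, 3))
```

In practice the coefficients come from a library like scikit-learn, but the scoring arithmetic is exactly this sigmoid over a weighted feature sum.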
How We Define Success:
- Expand access to data driven decision making across the organization
- Solve problems in risk, marketing, growth, customer behaviour through analytics models that increase efficacy
Nice To Have, but Not Required:
- Experience in dashboarding libraries like Python Dash and exposure to CI/CD
- Exposure to big data tools like Spark, and some core tech knowledge around API’s, data streaming etc.
Novo values diversity as a core tenet of the work we do and the businesses we serve. We are an equal opportunity employer, indiscriminate of race, religion, ethnicity, national origin, citizenship, gender, gender identity, sexual orientation, age, veteran status, disability, genetic information or any other protected characteristic.
Responsibilities:
- Should act as a technical resource for the Data Science team and be involved in creating and implementing current and future Analytics projects like data lake design, data warehouse design, etc.
- Analysis and design of ETL solutions to store/fetch data from multiple systems like Google Analytics, CleverTap, CRM systems etc.
- Developing and maintaining data pipelines for real time analytics as well as batch analytics use cases.
- Collaborate with data scientists and actively work in the feature engineering and data preparation phase of model building
- Collaborate with product development and dev ops teams in implementing the data collection and aggregation solutions
- Ensure quality and consistency of the data in Data warehouse and follow best data governance practices
- Analyse large amounts of information to discover trends and patterns
- Mine and analyse data from company databases to drive optimization and improvement of product development, marketing techniques and business strategies.
Requirements
- Bachelor’s or Master’s degree in a highly numerate discipline such as Engineering, Science or Economics
- 2-6 years of proven experience working as a Data Engineer preferably in ecommerce/web based or consumer technologies company
- Hands-on experience working with big data tools like Hadoop, Spark, Flink, Kafka and so on
- Good understanding of AWS ecosystem for big data analytics
- Hands on experience in creating data pipelines either using tools or by independently writing scripts
- Hands on experience in scripting languages like Python, Scala, Unix Shell scripting and so on
- Strong problem solving skills with an emphasis on product development.
- Experience using business intelligence tools e.g. Tableau, Power BI would be an added advantage (not mandatory)
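To make the batch-versus-streaming distinction in the pipeline work above concrete, here is a toy stream aggregation in plain Python (hypothetical event data; in production the stream would come from something like Kafka and the aggregation would run in Spark or Flink):

```python
from collections import defaultdict

# Hypothetical clickstream events; a real source would be a message queue.
EVENTS = [
    {"user": "a", "action": "view"},
    {"user": "b", "action": "view"},
    {"user": "a", "action": "purchase"},
    {"user": "a", "action": "view"},
]

def event_stream(events):
    """Simulate a streaming source by yielding events one at a time."""
    for event in events:
        yield event

def aggregate(stream):
    """Incrementally count actions per user, as a stream processor would."""
    counts = defaultdict(int)
    for event in stream:
        counts[(event["user"], event["action"])] += 1
    return dict(counts)

result = aggregate(event_stream(EVENTS))
print(result)  # {('a', 'view'): 2, ('b', 'view'): 1, ('a', 'purchase'): 1}
```

The batch version of the same job would read the full day's events at once; the streaming version keeps the running `counts` state and updates it per event.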
- Perform research and development in machine learning, specifically in the areas of speech recognition, digital signal processing, audio signal processing, natural language processing and natural language understanding
- Read and keep up with the research in speech recognition, machine learning and deep learning
- Understand and apply research papers to the business problem and build the solution
- Contribute to applied research and the open source community
- Mentor and guide team members
YOU'LL BE OUR: Data Scientist
YOU'LL BE BASED AT: IBC Knowledge Park, Bangalore
YOU'LL BE ALIGNED WITH: Engineering Manager
YOU'LL BE A MEMBER OF: Data Intelligence
WHAT YOU'LL DO AT ATHER:
- Work with the vehicle intelligence platform to evolve the algorithms and the platform, enhancing ride experience.
- Provide data-driven solutions, from simple to fairly complex insights, on the data collected from the vehicle.
- Identify measures and metrics that could be used insightfully to make decisions across firmware components, and productionize these.
- Support the data science lead and manager, and partner in fairly intensive projects around diagnostics, predictive modeling, BI and engineering data science.
- Build and automate scripts that can be reused efficiently.
- Build interactive reports/dashboards that can be reused across engineering teams for their discussions and explorations.
- Support monitoring and measuring the success of algorithms and features built, and lead innovation through objective reasoning and thinking. Engage with the data science lead and the engineering team stakeholders on the solution approach and draft a plan of action.
- Contribute to the product/team roadmap by generating and implementing innovative data- and analysis-based ideas as product features.
- Handhold/guide the team in successful conceptualization and implementation of key product differentiators through effective benchmarking.
HERE'S WHAT WE ARE LOOKING FOR :
• Good C++ and Golang programming skills, and an understanding of system architecture
• Experience with IoT and telemetry will be a plus
• Proficient in R Markdown, Python and Grafana
• Proficient in SQL and NoSQL
• Proficient in R/Python programming
• Good understanding of ML techniques and Spark ML
YOU BRING TO ATHER:
• B.E/B.Tech preferably in Computer Science
• 3 to 5 years of work experience as a Data Scientist
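A small example of the kind of vehicle-telemetry metric work described above: smoothing a noisy signal with a rolling mean (the readings and window size here are hypothetical, not Ather data):

```python
from collections import deque

def rolling_mean(samples, window=3):
    """Smooth a telemetry signal with a fixed-size sliding window."""
    buf = deque(maxlen=window)  # automatically drops the oldest sample
    out = []
    for s in samples:
        buf.append(s)
        out.append(sum(buf) / len(buf))
    return out

# Hypothetical battery temperature readings from a vehicle (degrees C).
temps = [30.0, 31.0, 35.0, 34.0, 33.0]
print(rolling_mean(temps))
```

The same windowed computation underlies most dashboard metrics; in a Grafana or SQL setting it would be expressed as a window function over a time column.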
- Experience with relational SQL & NoSQL databases including MySQL & MongoDB.
- Familiar with the basic principles of distributed computing and data modeling.
- Experience with distributed data pipeline frameworks like Celery, Apache Airflow, etc.
- Experience with NLP and NER models is a bonus.
- Experience building reusable code and libraries for future use.
- Experience building REST APIs.
Preference will be given to candidates working in tech product companies.
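The posting above mentions distributed pipeline frameworks like Celery and Apache Airflow. The core idea those frameworks build on, running tasks in dependency order, can be sketched in a few lines (this toy runner is illustrative only and omits cycle detection, retries and scheduling):

```python
# A toy topological task runner, sketching the dependency-ordering idea
# behind DAG frameworks; real Airflow DAGs are declared quite differently.
def run_dag(tasks, deps):
    """Run tasks so that every task executes after all of its dependencies."""
    done, order = set(), []

    def run(name):
        if name in done:
            return
        for dep in deps.get(name, []):  # recurse into dependencies first
            run(dep)
        tasks[name]()
        done.add(name)
        order.append(name)

    for name in tasks:
        run(name)
    return order

log = []
tasks = {
    "extract": lambda: log.append("extracted"),
    "transform": lambda: log.append("transformed"),
    "load": lambda: log.append("loaded"),
}
deps = {"transform": ["extract"], "load": ["transform"]}
order = run_dag(tasks, deps)
print(order)  # ['extract', 'transform', 'load']
```

Whatever order the tasks are listed in, each one only fires once all of its declared dependencies have completed, which is exactly the guarantee a DAG scheduler provides.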
· Advanced Spark programming skills
· Advanced Python skills
· Data engineering ETL and ELT skills
· Expertise in streaming data
· Experience in the Hadoop ecosystem
· Basic understanding of cloud platforms
· Technical design skills, alternative approaches
· Hands-on expertise in writing UDFs
· Hands-on expertise in streaming data ingestion
· Able to independently tune Spark scripts
· Advanced debugging skills and large-volume data handling
· Independently break down and plan technical tasks