Job Responsibilities:
- Develop robust, scalable, and maintainable machine learning models to solve business problems using large data sets.
- Build methods for document clustering, topic modeling, text classification, named entity recognition, sentiment analysis, and part-of-speech (POS) tagging.
- Perform data cleaning, feature selection, and feature engineering, and organize experiments in line with best practices.
- Benchmark, apply, and test algorithms against success metrics. Interpret the results by relating those metrics to the business process.
- Work with development teams to ensure models can be implemented as part of a delivered solution that is replicable across many clients.
- Knowledge of Machine Learning, NLP, Document Classification, Topic Modeling, and Information Extraction, with a proven track record of applying them to real problems.
- Experience working with big data systems and big data concepts.
- Ability to communicate clearly and concisely with both other technical teams and non-technical domain specialists.
- Strong team player; the ability to make a strong individual contribution while also working as part of a team and contributing to wider goals is a must in this dynamic environment.
- Experience with noisy and/or unstructured textual data.
- Knowledge of knowledge graphs and NLP techniques, including summarization, topic modelling, etc.
- Strong coding ability with statistical analysis tools in Python or R, and general software development skills (source code management, debugging, testing, deployment, etc.)
- Working knowledge of various text mining algorithms and their use cases, such as keyword extraction, PLSA, LDA, HMM, CRF, deep learning and recurrent neural networks, word2vec/doc2vec, and Bayesian modeling.
- Strong understanding of text pre-processing and normalization techniques, such as tokenization, POS tagging, and parsing, and how they work at a low level (a minimal sketch follows this list).
- Excellent problem-solving skills.
- Strong verbal and written communication skills.
- Master's degree or higher in data mining or machine learning, or equivalent practical analytics/modelling experience.
- Practical experience using NLP-related techniques and algorithms.
- Experience with open-source development and communities is desirable.
- Able to containerize models and associated modules and work in a microservices environment.
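To make the pre-processing expectations above concrete, here is a minimal sketch using NLTK; spaCy or any comparable library would do the same job, and the sample sentence is just a placeholder. Exact NLTK resource names can vary between versions.

    # Minimal pre-processing sketch with NLTK (spaCy would work equally well).
    import nltk

    # One-time resource downloads; exact resource names vary by NLTK version.
    nltk.download("punkt", quiet=True)
    nltk.download("averaged_perceptron_tagger", quiet=True)

    text = "Srijan builds high-traffic websites and complex web applications."

    # Tokenization: split raw text into word-level tokens.
    tokens = nltk.word_tokenize(text)

    # Normalization: lowercase and keep alphabetic tokens only.
    normalized = [t.lower() for t in tokens if t.isalpha()]

    # POS tagging: label each token with its part of speech.
    tagged = nltk.pos_tag(tokens)

    print(normalized)
    print(tagged)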
About Srijan Technologies
Srijan is today the largest pure-play Drupal agency in Asia. Srijan specializes in building high-traffic websites and complex web applications in Drupal and has been serving clients across USA, Asia, Europe, Australia and the Middle East.
Srijan Technologies is a 17-year-old technology services firm.
For a large part of its life, Srijan has specialised in building content management systems, with expertise in PHP-based open-source CMSs, specifically Drupal. In recent years Srijan has diversified into
i) Data Engineering using NodeJS and Python,
ii) Data Science -- Analytics and Machine Learning and
iii) API Management using APIGEE.
Services offered by Srijan:
Digital Experience brings content management systems that mirror the way your organization should work;
Product Engineering bridges the gap between concept and market; and
Platform Modernisation creates modular, flexible infrastructures for your business that anticipate change.
About the company:
It helps companies uncover the 3% of active buyers in their target market. It evaluates over 100 billion data points and analyzes factors such as buyer journeys, technology adoption patterns, and other digital footprints to deliver market and sales intelligence. Its customers have access to the buying patterns and contact information of more than 17 million companies and 70 million decision makers across the world.
Role – Data Engineer
Responsibilities
- Work in collaboration with the application and integration teams to design, create, and maintain optimal data pipeline architecture and data structures for the Data Lake/Data Warehouse.
- Work with stakeholders, including the Sales, Product, and Customer Support teams, to assist with data-related technical issues and support their data analytics needs.
- Assemble large, complex data sets from third-party vendors to meet business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, Elasticsearch, MongoDB, and AWS technologies.
- Streamline existing reporting and introduce enhanced reporting and analysis solutions that leverage complex data sources derived from multiple internal systems.
Requirements
- 5+ years of experience in a Data Engineer role.
- Proficiency in Linux.
- Strong SQL knowledge and experience with relational databases and query authoring, as well as familiarity with databases including MySQL, MongoDB, Cassandra, and Athena.
- Must have experience with Python/Scala.
- Must have experience with Big Data technologies like Apache Spark.
- Must have experience with Apache Airflow (a minimal pipeline sketch follows this list).
- Experience with data pipeline and ETL tools like AWS Glue.
- Experience working with AWS cloud services: EC2, S3, RDS, Redshift.
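As a sketch of the pipeline work described above, the following is a minimal Apache Airflow DAG. The task bodies and the DAG name are hypothetical placeholders, and the `schedule` argument assumes Airflow 2.4+ (older versions use `schedule_interval`).

    # Minimal Airflow DAG sketch: a daily extract -> transform -> load pipeline.
    # Task bodies and the dag_id are hypothetical placeholders.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        ...  # e.g., pull vendor files from S3

    def transform():
        ...  # e.g., clean and join the raw data

    def load():
        ...  # e.g., write the results to Redshift

    with DAG(
        dag_id="vendor_etl",  # hypothetical name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",    # Airflow 2.4+; older versions use schedule_interval
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)
        load_task = PythonOperator(task_id="load", python_callable=load)

        extract_task >> transform_task >> load_task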
About the company:
VakilSearch is a technology-driven platform, offering services that cover the legal needs of startups and established businesses. Some of our services include incorporation, government registrations & filings, accounting, documentation and annual compliances. In addition, we offer a wide range of services to individuals, such as property agreements and tax filings. Our mission is to provide one-click access to individuals and businesses for all their legal and professional needs.
You can learn more about us at vakilsearch.com.
About the role:
A successful data analyst needs a combination of technical as well as leadership skills. A background in Mathematics, Statistics, Computer Science, or Information Management can serve as a solid foundation for building your career as a data analyst at VakilSearch.
Why join VakilSearch:
- Unlimited opportunities to grow
- Flat hierarchy
- An encouraging environment that unleashes your out-of-the-box thinking
Responsibilities:
- Preparing reports for stakeholders and management, enabling them to make important decisions based on facts and trends.
- Using automated tools to extract data from primary and secondary sources
- Identify and recommend the right product metrics to be analysed and tracked for every feature/problem statement.
- Using statistical tools to identify, analyze, and interpret patterns and trends in complex data sets that can support diagnosis and prediction.
- Working with programmers, engineers, and management heads to identify process improvement opportunities, propose system modifications, and devise data governance strategies.
Required skills:
- Bachelor's degree in computer science from an accredited university or college, or a graduate of a data-science-related program.
- 0-2 years of experience in data analysis.
- Design and build web crawlers to scrape data and URLs.
- Integrate the data crawled and scraped into our databases
- Create more/better ways to crawl relevant information
- Strong knowledge of web technologies (HTML, CSS, JavaScript, XPath, regex)
- Understanding of data privacy policies (esp. GDPR) and personally identifiable information
- Develop automated and reusable routines for extracting information from various data sources
- Prepare requirement summary and re-confirm with Operation team
- Translate business requirements into specific solutions
- Ability to relay technical information to non-technical users
- Demonstrated effective problem-solving and analytical skills.
- Attention to detail, proactivity, critical thinking, and accuracy are essential.
- Ability to work to deadlines and give realistic estimates
Skills & Expertise
- 2+ years of web scraping experience
- Experience with two or more of the following web scraping frameworks and tools: Selenium, Scrapy, Import.io, Webhose.io, ScrapingHub, ParseHub, Phantombuster, Octoparse, Puppeteer, etc. (an illustrative Scrapy sketch follows this list)
- Basic knowledge of data engineering (database ingestion, ETL, etc.)
- Solution orientation and a "can do" attitude, with a desire to tackle complex problems.
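As an illustration of the scraping work above, here is a minimal Scrapy spider; the target URL and CSS selectors are hypothetical placeholders, not a real site's structure.

    # Minimal Scrapy spider sketch; site structure and selectors are assumptions.
    import scrapy

    class ListingsSpider(scrapy.Spider):
        name = "listings"
        start_urls = ["https://example.com/listings"]  # placeholder URL

        def parse(self, response):
            # Yield one record per listing card (selector is an assumption).
            for card in response.css("div.listing"):
                yield {
                    "title": card.css("h2::text").get(),
                    "url": card.css("a::attr(href)").get(),
                }
            # Follow pagination, if present.
            next_page = response.css("a.next::attr(href)").get()
            if next_page:
                yield response.follow(next_page, callback=self.parse)

Running it with `scrapy runspider listings_spider.py -o listings.json` writes the scraped records to a file, from which they can be ingested into a database.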
About LodgIQ
LodgIQ is led by a team of experienced hospitality technology experts, data scientists and product domain experts. Seed funded by Highgate Ventures, a venture capital platform focused on early stage technology investments in the hospitality industry and Trilantic Capital Partners, a global private equity firm, LodgIQ has made a significant investment in advanced machine learning platforms and data science.
Title : Data Scientist
Job Description:
- Apply Data Science and Machine Learning to a REAL-LIFE problem - “Predict Guest Arrivals and Determine Best Prices for Hotels”
- Apply advanced analytics in a BIG Data Environment – AWS, MongoDB, SKLearn
- Help scale up the product in a global offering across 100+ global markets
Qualifications:
- Minimum 3 years of experience with advanced data analytic techniques, including data mining, machine learning, statistical analysis, and optimization. Student projects are acceptable.
- At least 1 year of experience with Python / NumPy / Pandas / SciPy / Matplotlib / scikit-learn
- Experience working with massive structured and unstructured data sets, with at least one prior engagement involving data gathering, data cleaning, data mining, and data visualization (an illustrative sketch follows this list)
- Solid grasp of optimization techniques
- Master's or PhD degree in Business Analytics, Data Science, Statistics, or Mathematics
- Ability to show a track record of solving large, complex problems
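For illustration only, here is a minimal scikit-learn sketch of the kind of price-prediction task described above; the features and data are synthetic stand-ins, not LodgIQ's actual model.

    # Illustrative price-regression sketch; all data here is synthetic.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import mean_absolute_error
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(42)
    n = 1000

    # Hypothetical features: day of week, booking lead time (days), occupancy rate.
    X = np.column_stack([
        rng.integers(0, 7, n),
        rng.integers(0, 90, n),
        rng.random(n),
    ])
    y = 80 + 5 * X[:, 0] - 0.3 * X[:, 1] + 60 * X[:, 2] + rng.normal(0, 5, n)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)
    print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))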
Specialism: Advanced Analytics, Data Science, regression, forecasting, analytics, SQL, R, Python, decision trees, random forests, SAS, clustering, classification
Senior Analytics Consultant - Responsibilities
- Understand the business problem and requirements by building domain knowledge, and translate them into a data science problem
- Conceptualize and design cutting-edge data science solutions to solve the problem, applying design-thinking concepts
- Identify the right algorithms, tech stack, and sample outputs required to efficiently address the end need
- Prototype and experiment with the solution to successfully demonstrate its value
- Independently, or with support from the team, execute the conceptualized solution as per plan, following project management guidelines
- Present the results to internal and client stakeholders in an easy-to-understand manner, with strong storytelling, storyboarding, insights, and visualization
- Help build overall data science capability for eClerx through support in pilots, pre-sales pitches, product development, and practice development initiatives
Role Summary
We are looking for an analytically inclined, insights-driven Product Analyst to make our organisation more data-driven. In this role you will be responsible for creating dashboards to drive insights for product and business teams, empowering them in everything from day-to-day decisions to long-term impact assessment and measuring the efficacy of different products or teams. The growing nature of the team will require you to be in touch with all of the teams at upGrad. If you are the go-to person everyone turns to for data, then this role is for you.
Roles & Responsibilities
- Lead and own the analysis of highly complex data sources, identifying trends and patterns in data and provide insights/recommendations based on analysis results
- Build, maintain, own, and communicate detailed reports to assist Marketing, Growth/Learning Experience, and other Business/Executive teams
- Own the design, development, and maintenance of ongoing metrics, reports, analyses, dashboards, etc. to drive key business decisions.
- Analyze data and generate insights in the form of user analysis, user segmentation, performance reports, etc.
- Facilitate review sessions with management, business users and other team members
- Design and create visualizations to present actionable insights related to data sets and business questions at hand
- Develop intelligent models around channel performance, user profiling, and personalization
Skills Required
- 4-6 years of hands-on experience with product-related analytics and reporting
- Experience with building dashboards in Tableau or other data visualization tools such as D3
- Strong data, statistics, and analytical skills, with a good grasp of SQL (a small pandas sketch follows this list).
- Programming experience in Python is a must
- Comfortable managing large data sets
- Good Excel/data management skills
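As a small illustration of the reporting work above, here is a pandas sketch computing a per-segment product metric; the columns and rows are hypothetical placeholders.

    # Illustrative pandas sketch: a simple per-segment product metric.
    import pandas as pd

    events = pd.DataFrame({
        "user_id":   [1, 1, 2, 2, 3, 3, 3],
        "segment":   ["free", "free", "paid", "paid", "paid", "paid", "paid"],
        "completed": [0, 1, 1, 1, 0, 1, 1],
    })

    # Unique users and completion rate per segment, as might feed a dashboard.
    report = events.groupby("segment").agg(
        users=("user_id", "nunique"),
        completion_rate=("completed", "mean"),
    )
    print(report)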
We are looking for passionate, talented and super-smart engineers to join our product development team. If you are someone who innovates, loves solving hard problems, and enjoys end-to-end product development, then this job is for you! You will be working with some of the best developers in the industry in a self-organising, agile environment where talent is valued over job title or years of experience.
Responsibilities:
- You will be involved in end-to-end development of VIMANA technology, adhering to our development practices and expected quality standards.
- You will be part of a highly collaborative Agile team which passionately follows SAFe Agile practices, including pair-programming, PR reviews, TDD, and Continuous Integration/Delivery (CI/CD).
- You will be working with cutting-edge technologies and tools for stream processing in Java, NodeJS, and Python, using frameworks like Spring, RxJS, etc.
- You will be leveraging big data technologies like Kafka, Elasticsearch, and Spark, processing more than 10 billion events per day to build a maintainable system at scale (a minimal consumer sketch follows this list).
- You will be building Domain Driven APIs as part of a micro-service architecture.
- You will be part of a DevOps culture where you will get to work with production systems, including operations, deployment, and maintenance.
- You will have an opportunity to continuously grow and build your capabilities, learning new technologies, languages, and platforms.
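As a rough sketch of the stream-processing work mentioned above, here is a minimal consumer using the kafka-python client; the topic, broker address, and event fields are hypothetical placeholders, not VIMANA's actual schema.

    # Minimal Kafka consumer sketch (kafka-python client).
    import json

    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        "machine-events",                   # hypothetical topic
        bootstrap_servers="localhost:9092",
        group_id="event-processor",
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
        auto_offset_reset="earliest",
    )

    for message in consumer:
        event = message.value
        # Real processing would validate, enrich, and forward each event,
        # e.g., index it into Elasticsearch.
        print(event.get("machine_id"), event.get("status"))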
Requirements:
- Undergraduate degree in Computer Science or a related field, or equivalent practical experience.
- 2 to 5 years of product development experience.
- Experience building applications using Java, NodeJS, or Python.
- Deep knowledge in Object-Oriented Design Principles, Data Structures, Dependency Management, and Algorithms.
- Working knowledge of message queuing, stream processing, and highly scalable Big Data technologies.
- Experience in working with Agile software methodologies (XP, Scrum, Kanban), TDD and Continuous Integration (CI/CD).
- Experience using NoSQL databases like MongoDB or Elasticsearch.
- Prior experience with container orchestrators like Kubernetes is a plus.
We build products and platforms for the Industrial Internet of Things. Our technology is being used around the world in mission-critical applications - from improving the performance of manufacturing plants, to making electric vehicles safer and more efficient, to making industrial equipment smarter.
Please visit https://govimana.com/ to learn more about what we do.
Why Explore a Career at VIMANA
- We recognize that our dedicated team members make us successful and we offer competitive salaries.
- We are a workplace that values work-life balance, provides flexible working hours, and full time remote work options.
- You will be part of a team that is highly motivated to learn and work on cutting edge technologies, tools, and development practices.
- Bon Appetit! Enjoy catered breakfasts, lunches and free snacks!
VIMANA Interview Process
We usually aim to complete all interviews within a week and provide prompt feedback to the candidate. As of now, all interviews are conducted online due to the COVID-19 situation.
1. Telephonic screening (30 min)
A 30-minute telephonic interview to understand and evaluate the candidate's fit with the job role and the company.
Clarify any queries regarding the job/company.
Give an overview of the further interview rounds
2. Technical Rounds
This would be a deep technical round to evaluate the candidate's technical capability pertaining to the job role.
3. HR Round
The candidate's team and cultural fit will be evaluated during this round.
We would proceed with releasing the offer if the candidate clears all the above rounds.
Note: In certain cases, we might schedule additional rounds if needed before releasing the offer.
Required Experience: 5 - 7 Years
Skills: ADF, Azure, SSIS, Python
Job Description
Azure Data Engineer with hands-on SSIS migration and ADF expertise.
Roles & Responsibilities
- Overall, 6+ years' experience in Cloud Data Engineering, with hands-on experience in ADF (Azure Data Factory), is required.
- Hands-on experience migrating SQL Server Integration Services (SSIS) workloads to SSIS in ADF is preferred; must have done at least one such migration (a hedged sketch follows this list).
- Hands-on experience implementing Azure Data Factory frameworks, scheduling, and performance tuning.
- Hands-on ADF development experience.
- Hands-on experience with MPP database architecture.
- Hands-on experience in Python.
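As a hedged sketch of programmatic ADF work, the following creates a simple copy pipeline with the azure-mgmt-datafactory SDK. The subscription, resource group, factory, and dataset names are placeholders, the referenced datasets are assumed to already exist in the factory, and exact model signatures can vary between SDK versions.

    # Sketch: create an ADF pipeline with one copy activity (azure-mgmt-datafactory).
    # All names below are placeholders; datasets are assumed to exist already.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.datafactory import DataFactoryManagementClient
    from azure.mgmt.datafactory.models import (
        BlobSink, BlobSource, CopyActivity, DatasetReference, PipelineResource,
    )

    subscription_id = "<subscription-id>"
    rg_name = "<resource-group>"
    df_name = "<data-factory>"

    client = DataFactoryManagementClient(DefaultAzureCredential(), subscription_id)

    # A single copy activity from an input blob dataset to an output blob dataset.
    copy = CopyActivity(
        name="CopyVendorData",
        inputs=[DatasetReference(type="DatasetReference", reference_name="InputDataset")],
        outputs=[DatasetReference(type="DatasetReference", reference_name="OutputDataset")],
        source=BlobSource(),
        sink=BlobSink(),
    )

    pipeline = PipelineResource(activities=[copy])
    client.pipelines.create_or_update(rg_name, df_name, "VendorCopyPipeline", pipeline)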