
Good experience as a team lead.
Excellent English communication skills and fluent communication experience
Technically strong, with strong analytical and problem-solving skills
Server knowledge; strong in Git, GitLab, and SVN
Sprint planning and Scrum management
Experience producing documentation, such as technical project documents
Helping developers work through technical problems
Strong R&D skills; Scrum
Strong database knowledge of MySQL, PostgreSQL, and MongoDB
Strong expertise in building scalable applications
Knowledge of different software architectures and when to use each, as well as design patterns.
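The last point above mentions design patterns. As a minimal illustrative sketch (class and function names are invented for this example, not taken from the posting), the classic Strategy pattern in Python lets you swap an algorithm at runtime without changing the caller:

```python
# Strategy pattern sketch: interchangeable discount strategies.
# Names here are hypothetical, purely for illustration.

class PercentageDiscount:
    def __init__(self, pct):
        self.pct = pct

    def apply(self, price):
        return price * (1 - self.pct / 100)

class FlatDiscount:
    def __init__(self, amount):
        self.amount = amount

    def apply(self, price):
        return max(0.0, price - self.amount)

def checkout(price, discount):
    # The caller depends only on the `apply` interface,
    # not on any concrete strategy class.
    return discount.apply(price)

print(checkout(200.0, PercentageDiscount(10)))  # 180.0
print(checkout(200.0, FlatDiscount(25)))        # 175.0
```

Swapping the strategy object changes the behaviour without touching `checkout`, which is the "where to use which" judgement the posting asks for.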

About kanhasoft
As a prominent web development company in India (https://www.kanhasoft.com/web-app-development.html), our core expertise lies in custom software development for web and mobile applications, tailored to each client's needs and business workflow. We assist our clients every step of the way, from consultation and design through development and maintenance. Leave it all in the hands of our expert web developers (https://www.kanhasoft.com/hire-resources.html) and we will build you secure, innovative, and responsive web applications.


Job Title – Data Scientist (Forecasting)
Anicca Data is seeking a Data Scientist (Forecasting) who is motivated to apply his/her/their skill set to solve complex and challenging problems. The focus of the role will center on applying deep learning models to real-world applications. The candidate should have experience in training and testing deep learning architectures, and is expected to work on existing codebases or write optimized new code at Anicca Data. The ideal addition to our team is self-motivated, highly organized, and a team player who thrives in a fast-paced environment, with the ability to learn quickly and work independently.
Job Location: Remote (for time being) and Bangalore, India (post-COVID crisis)
Required Skills:
- 3+ years of experience in a Data Scientist role
- Bachelor's/Master's degree in Computer Science, Engineering, Statistics, Mathematics, or a similar quantitative discipline; a Ph.D. will add merit to the application process
- Experience with large data sets, big data, and analytics
- Exposure to statistical modeling, forecasting, and machine learning, with deep theoretical and practical knowledge of deep learning, machine learning, statistics, probability, and time series forecasting
- Experience training Machine Learning (ML) algorithms in the areas of forecasting and prediction
- Experience in developing and deploying machine learning solutions in a cloud environment (AWS, Azure, Google Cloud) for production systems
- Research and enhance existing in-house, open-source models, integrate innovative techniques, or create new algorithms to solve complex business problems
- Experience in translating business needs into problem statements, prototypes, and minimum viable products
- Experience managing complex projects including scoping, requirements gathering, resource estimations, sprint planning, and management of internal and external communication and resources
- Ability to write C++ and Python code, using TensorFlow and PyTorch, to build and enhance the platform used for training ML models
Preferred Experience:
- Worked on forecasting projects – both classical and ML models
- Experience training time series forecasting methods such as Moving Average (MA) and Autoregressive Integrated Moving Average (ARIMA), as well as Neural Network (NN) models such as feed-forward NNs and nonlinear autoregressive networks
- Strong background in forecasting accuracy drivers
- Experience in Advanced Analytics techniques such as regression, classification, and clustering
- Ability to explain complex topics in simple terms, and to explain use cases and tell stories
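Of the classical methods listed above, the simplest to sketch is a moving-average forecast. The example below is a minimal illustration in plain Python (the series, window size, and function name are invented for this sketch); note that this is simple window smoothing, not a fitted MA(q) model in the ARIMA sense:

```python
# Minimal moving-average forecast: predict the next value as the
# mean of the last `window` observations. No external dependencies.

def moving_average_forecast(series, window=3):
    """Forecast the next point as the mean of the trailing window."""
    if len(series) < window:
        raise ValueError("series shorter than window")
    return sum(series[-window:]) / window

# Illustrative series (e.g. weekly sales).
sales = [10.0, 12.0, 11.0, 13.0, 12.0, 14.0]
print(moving_average_forecast(sales, window=3))  # mean of [13.0, 12.0, 14.0] = 13.0
```

In practice a fitted ARIMA model (e.g. via statsmodels) would estimate coefficients from the data rather than use a fixed window, but the windowed mean conveys the core idea.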

- Discussing project aims with the client and development team.
- Designing and building web applications using Laravel.
- Troubleshooting implementation issues and debugging builds.
- Working with front-end and back-end developers on projects.
- Testing functionality for users and the backend.
- Ensuring that integrations run smoothly.
- Scaling projects based on client feedback.
- Recording and reporting on work done in Laravel.
- Maintaining web-based applications.
- Presenting work in meetings with clients and management.

Must have
- Experience with React and Redux or similar libraries
- Experience with REST API integration
- Passion for the user experience
- Android and iOS development

Other requirements
- Good knowledge of JavaScript functions and operators
- Responsibility for major tasks and chores
- Willingness to resolve technical debt at an early stage
- Good understanding of the sprint cycle in Agile
- Ability to communicate with developers and non-developers
- Serverless API experience is not required
- Good to have: Git workflow knowledge
- Good to have: GitHub collaboration experience
• Candidate should have good communication skills and strong powers of persuasion.
• Candidate should be comfortable with calling.
• Lead generation; ability to work in a team or individually as and when required.
• Outstanding problem-solving skills.
• Great interpersonal skills.
• Strong organizational skills.
We are a learner-centric global EdTech company that transforms people's passion into profession through new-age, future-ready and industry-relevant education. We equip learners with cutting-edge skills and qualifications so they can #StayRelevant in a transforming global landscape.
We are looking for an Admission Counsellor to join our young and vibrant team. If you have the gift of the gab and want to transform the higher education sector, we would love to hear from you. As part of our team, you will be at the forefront of transformative initiatives in higher education and a valuable resource who will directly contribute to and benefit from our growth.
Roles and Responsibilities:
● Connect daily with 80-100 working professionals on calls and generate prospects via the CRM database provided to each individual (minimum 120 minutes of talk time).
● Understand each student's area of interest, build rapport and identify avenues of upskilling.
● Provide comprehensive information about available programs to students via phone calls, emails, WhatsApp, etc.
● Diligently complete daily tasks and achieve monthly targets.
Qualifications:
● Excellent communication skills and the ability to steer conversations.
● Bachelor's degree in any discipline.
● Candidates from the Education, Insurance, EdTech, BPO, KPO and BFSI sectors will be given preference.
● Freshers can also apply.

What you will do:
- Leveraging your deep knowledge to provide technical leadership to take projects from zero to completion
- Architecting, building and maintaining scalable data pipelines and access patterns related to permissions and security
- Researching, evaluating and utilising new technologies/tools/frameworks centred around high-volume data processing
- Involvement in building and deploying large-scale data processing pipelines in a production environment
- Working with data scientists and other engineers to develop data pipelines for model development and productization
- Identifying gaps and implementing solutions for data security, quality and automation of processes
- Providing input on the right tool options and model designs for use cases
- Designing scalable implementations of the models developed by our Data Scientists
Desired Candidate Profile
What you need to have:
- 3+ years of strong programming experience in PySpark and Python
- Knowledge of Python, SQL, and Spark (PySpark)
- Exposure to AWS/Azure cloud tools and services such as S3, Athena, Apache NiFi, and Apache Airflow
- Analytical and problem-solving skills
- Knowledge of Scrum and code-sharing tools: Git, Jira
- Experience with processing frameworks such as Spark, Spark Streaming, Hive, Sqoop, Kafka, etc.
- Deep understanding of measuring and ensuring data quality at scale and the required tooling to monitor and optimise the performance of our data pipelines
- Experience building data pipelines and data-centric applications using distributed storage platforms, and shipping production data pipelines that source data from a diverse array of sources
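The "measuring and ensuring data quality" point above can be illustrated with a minimal, framework-agnostic sketch. Plain Python is used here to keep the example self-contained (in a real PySpark job the same idea would use DataFrame aggregations); the column names, sample batch, and threshold are invented for illustration:

```python
# Data-quality gate sketch for a pipeline batch: compute the null
# rate per column and flag any column that exceeds a threshold.

def null_rates(rows, columns):
    """Fraction of None values per column across a batch of dict rows."""
    total = len(rows)
    return {c: sum(1 for r in rows if r.get(c) is None) / total for c in columns}

def check_quality(rows, columns, max_null_rate=0.1):
    """Return the list of columns whose null rate exceeds the threshold."""
    rates = null_rates(rows, columns)
    return [c for c, rate in rates.items() if rate > max_null_rate]

# Illustrative batch: each column has a 25% null rate.
batch = [
    {"user_id": 1, "amount": 9.99},
    {"user_id": 2, "amount": None},
    {"user_id": None, "amount": 5.00},
    {"user_id": 4, "amount": 7.50},
]
print(check_quality(batch, ["user_id", "amount"], max_null_rate=0.3))  # [] (passes)
```

Running the same check with `max_null_rate=0.2` would flag both columns, which is the point at which a production pipeline would fail the batch or route it for review.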


