Position Title: Customer Success Regional SPOC
Experience: 8+ Years
Location: Bhopal, Indore, Hyderabad, Jaipur
Skill set: B2B Sales, Financial SaaS, Edu Fintech
- Responsible for Embedded Stack implementation between institutes and LEO1
- Single point of support for ensuring seamless onboarding of students & coordination.
- Conducts training (product training for institute management, training for institute staff, review calendar training, and onboarding student training)
- Responsible for communications between both the entities
- Responsible for ensuring weekly updates and monthly reviews are conducted
- Coordinates between the teams on all items listed in the Institute Track check sheet.
- He/she will ensure the review calendar rhythm is maintained and accomplished.
- Monthly calendar for institute visits and ensuring travel to institutes for monthly reviews
- Sharing data templates on time to institutes
- Assisting in uploading/ loading details
- Ensures card dispatch for his/her institutes
- Coordinates with backend LEO1 teams to upload the fee structure
- Collates feedback and addresses it on a war footing
- Responsible for lead traction, business conversions, tie-ups with local vendors, banks, etc., and points of card usage
Required Key Skills:
- The role requires interacting with top institutes and their senior management teams, so a seasoned resource is needed.
- Excellent communication and training skills
- Able to convince teams and staff of the products and secure buy-in
- Open to travelling
Responsibilities:
- Implement solutions on the Microsoft Power Platform, Power Apps, and Dynamics 365
- Primary responsibilities include designing, prototyping, and supporting the testing, development, and refactoring of end-to-end applications on ever-changing modern technology platforms in public/hybrid cloud environments.
- Use multiple OOTB Connectors with PowerApps and Flow, and preferably create custom connectors for PowerApps and Microsoft Flow.
- Re-write and re-engineer custom legacy applications to PowerApps solutions.
- Adept at leveraging new approaches to solutions for system design and functionality.
- Interpret and design database models (MySQL, SQL Server, etc.).
- Address and remediate security vulnerability findings in PowerApps.
- Create documentation for projects including design, asset inventory, diagrams, and presentations as well as documentation of the environment and deployment for our support and future implementations teams.
- Communicate proposed designs and progress on the work to customers, team leads, and team members.
- Support cross-functional project teams consisting of app development, IT operations, and information security
Requirements:
- Minimum 2 years of experience in PowerApps
- Bachelors or masters in computer science or any related field
- Microsoft Certification in PowerApps will be an added advantage
- Excellent UI design skills and hands-on experience in designing and developing entities in PowerApps.
- Good exposure to Common Data Service.
- Good experience in developing Business Process Flows; able to work independently on the Java Spring Boot and database side.
Building the production machine learning system (MLOps) is the biggest challenge most large companies currently face in making the transition to becoming an AI-driven organization. This position is an opportunity for an experienced server-side developer to build expertise in this exciting new frontier. You will be part of a team deploying state-of-the-art AI solutions for Fractal clients.
Responsibilities
As an MLOps Engineer, you will work collaboratively with data scientists and data engineers to deploy and operate advanced analytics machine learning models. You’ll help automate and streamline model development and model operations. You’ll build and maintain tools for deployment, monitoring, and operations. You’ll also troubleshoot and resolve issues in development, testing, and production environments.
- Enable model tracking, model experimentation, and model automation
- Develop scalable ML pipelines
- Develop MLOps components in Machine learning development life cycle using Model Repository (either of): MLFlow, Kubeflow Model Registry
- Machine Learning Services (either of): Kubeflow, DataRobot, HopsWorks, Dataiku or any relevant ML E2E PaaS/SaaS
- Work across all phases of the model development life cycle to build MLOps components
- Build the knowledge base required to deliver increasingly complex MLOps projects on Azure
- Be an integral part of client business development and delivery engagements across multiple domains
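The model tracking and registry duties listed above are usually handled by a tool such as MLflow or the Kubeflow Model Registry. As a rough sketch of the underlying bookkeeping (not any specific library's API; all names here are illustrative), each run logs hyperparameters and metrics, and registering a model under a name bumps its version:

```python
import time

class ToyModelRegistry:
    """Toy stand-in for an MLflow-style tracking server and model registry.
    Each run records hyperparameters, metrics, and an artifact path;
    registering a model under a name increments its version number."""

    def __init__(self):
        self.runs = []        # append-only run log
        self.versions = {}    # model name -> latest registered version

    def log_run(self, params, metrics, artifact="model.pkl"):
        run = {
            "run_id": len(self.runs) + 1,
            "params": params,
            "metrics": metrics,
            "artifact": artifact,
            "timestamp": time.time(),
        }
        self.runs.append(run)
        return run["run_id"]

    def register(self, name, run_id):
        # Versions are monotonically increasing per model name.
        self.versions[name] = self.versions.get(name, 0) + 1
        return {"name": name, "version": self.versions[name], "run_id": run_id}

registry = ToyModelRegistry()
run_id = registry.log_run({"lr": 0.01, "epochs": 10}, {"auc": 0.91})
model = registry.register("churn-model", run_id)
```

A real deployment would replace this with calls such as `mlflow.log_param` and `mlflow.log_metric` against a tracking backend, but the same bookkeeping (runs, metrics, monotonically increasing model versions) applies.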
Required Qualifications
- 5.5-9 years of experience building production-quality software
- B.E/B.Tech/M.Tech in Computer Science or related technical degree OR equivalent
- Strong experience in System Integration, Application Development, or Data Warehouse projects across technologies used in the enterprise space
- Expertise in MLOps, machine learning, and Docker
- Object-oriented languages (e.g. Python, PySpark, Java, C#, C++)
- Experience developing CI/CD components for production-ready ML pipelines
- Database programming using any flavors of SQL
- Knowledge of Git for Source code management
- Ability to collaborate effectively with highly technical resources in a fast-paced environment
- Ability to solve complex challenges/problems and rapidly deliver innovative solutions
- Team handling, problem-solving, project management, and communication skills, plus creative thinking
- Foundational Knowledge of Cloud Computing on Azure
- Hunger and passion for learning new skills
Responsibilities :
- Provide support services to our Gold & Enterprise customers using our flagship product suites. This may include assistance provided during the engineering and operation of distributed systems, as well as responses for mission-critical systems and production customers.
- Lead end-to-end delivery and customer success of next-generation features related to scalability, reliability, robustness, usability, security, and performance of the product
- Lead and mentor others about concurrency, parallelization to deliver scalability, performance, and resource optimization in a multithreaded and distributed environment
- Demonstrate the ability to actively listen to customers and show empathy to the customer’s business impact when they experience issues with our products
Required Skills:
- 10+ years of experience with a highly scalable, distributed, multi-node environment (100+ nodes)
- Hadoop operations, including ZooKeeper, HDFS, YARN, Hive, and related components like the Hive metastore, Cloudera Manager/Ambari, etc.
- Authentication and security configuration and tuning (KNOX, LDAP, Kerberos, SSL/TLS, second priority: SSO/OAuth/OIDC, Ranger/Sentry)
- Java troubleshooting, e.g., collection and evaluation of jstacks, heap dumps
- Linux, NFS, Windows, including application installation, scripting, basic command line
- Docker and Kubernetes configuration and troubleshooting, including Helm charts, storage options, logging, and basic kubectl CLI
- Experience working with scripting languages (Bash, PowerShell, Python)
- Working knowledge of application, server, and network security management concepts
- Familiarity with virtual machine technologies
- Knowledge of databases like MySQL and PostgreSQL
- Certification on any of the leading Cloud providers (AWS, Azure, GCP ) and/or Kubernetes is a big plus
- As a Senior Software Engineer you will work closely with the Tech Lead, and the rest of the engineering team to build and scale a data-driven platform. This role will act as a great springboard to accelerate career growth & transition into a Tech Lead level role.
- Your primary focus will be the development of server-side logic, building new services and APIs, developing UI components, supporting the maintenance of current APIs, reviewing work, and improving the performance and reliability of our systems as we rapidly scale our product and organization.
- An effective Senior Software Engineer will be a self-motivated learner: a highly creative engineer with obsessive attention to detail and thoroughness.
Must Haves
- You’ve been building web applications professionally for 5+ years
- You’re proficient in NodeJS, TypeScript, PostgreSQL, and MongoDB
- You possess strong OOP and design pattern knowledge
- Familiar with modern engineering practices: Code Reviews, Continuous Deployment, Automated Testing, etc
- You write understandable, testable code with an eye towards maintainability and experienced with TDD (Test Driven Development)
- You have the ability to build RESTful APIs
- Explaining complex technical concepts to designers, support team, and fellow engineers is not a problem for you
- Well versed with the fundamentals of computer science and distributed systems
Nice-to-haves
- FrontEnd experience; have built applications in either: React, Vue, Angular, or Svelte
- Startup experience, preferably a tech startup
- Open Source contributor
- You have experience with other programming languages -- e.g. Python, Ruby, PHP, Go, C, etc.
- Passionate about/experienced with open source and developer tools
- You have a bachelor's degree in Computer Science, Engineering or related field, or equivalent training, fellowship, or work experience
Benefits
- Feel-good factor / impact-driven work: be part of the journey that is creating an impact in people’s lives after retirement. The work we do has a direct impact on the size of someone’s retirement pot
- Ownership-driven work: you own problems within the organisation rather than implementing someone else’s idea. You can design solutions and work with multiple stakeholders to implement them
- Job stability: ClearGlass is very well funded by top-tier VCs and can provide job stability in the COVID world
- Stock options: we would love every employee of ClearGlass to have stock options
- Top-of-the-line gear to help you be super-productive while working. No one likes a slow laptop
- Flexible work environment: we are working from home currently but will slowly move to a hybrid set-up; we offer support (incl. financial) to set up your home office
- Proficient in Java, Node or Python
- Experience with NewRelic, Splunk, SignalFx, DataDog etc.
- Monitoring and alerting experience
- Full stack development experience
- Hands-on with building and deploying micro services in Cloud (AWS/Azure)
- Experience with Terraform for Infrastructure as Code
- Should have experience troubleshooting live production systems using monitoring/log analytics tools
- Should have experience leading a team (2 or more engineers)
- Experienced using Jenkins or similar deployment pipeline tools
- Understanding of distributed architectures
Indium Software is a niche technology solutions company with deep expertise in Digital, QA, and Gaming. Indium helps customers in their Digital Transformation journey through a gamut of solutions that enhance business value.
With over 1000 associates globally, Indium operates through offices in the US, UK, and India.
Visit www.indiumsoftware.com to know more.
Job Title: Analytics Data Engineer
What will you do:
The Data Engineer must be an expert in SQL development, providing support to the Data and Analytics team in database design, data flow, and analysis activities. The Data Engineer also plays a key role in the development and deployment of innovative big data platforms for advanced analytics and data processing, and defines and builds the data pipelines that will enable faster, better, data-informed decision-making within the business.
We ask:
Extensive experience with SQL and a strong ability to process and analyse complex data
The candidate should also have the ability to design, build, and maintain the business’s ETL pipeline and data warehouse. The candidate will also demonstrate expertise in data modelling and query performance tuning on SQL Server
Analytics experience, especially funnel analysis, with hands-on exposure to analytical tools like Mixpanel, Amplitude, Thoughtspot, Google Analytics, and similar tools
Should work on tools and frameworks required for building efficient and scalable data pipelines
Excellent at communicating and articulating ideas, with an ability to influence others and continuously drive towards a better solution
Experience working in Python, Hive queries, Spark, PySpark, Spark SQL, and Presto
- Relate Metrics to product
- Programmatic Thinking
- Edge cases
- Good Communication
- Product functionality understanding
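The funnel-analysis skill called out above boils down to counting the distinct users who reach each step of a conversion funnel. A minimal SQL sketch (in-memory SQLite; the table and event names are illustrative assumptions) could look like:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE events (user_id INTEGER, step TEXT);
    -- Three users enter the funnel; only one completes it.
    INSERT INTO events VALUES
        (1, 'visit'), (1, 'signup'), (1, 'purchase'),
        (2, 'visit'), (2, 'signup'),
        (3, 'visit');
""")

# Distinct users per funnel step, widest step first.
funnel = conn.execute("""
    SELECT step, COUNT(DISTINCT user_id) AS users
    FROM events
    GROUP BY step
    ORDER BY users DESC
""").fetchall()
# funnel == [('visit', 3), ('signup', 2), ('purchase', 1)]
```

Tools like Mixpanel and Amplitude add step-ordering and time-window constraints on top of this; the sketch shows only the counting core.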
Perks & Benefits:
A dynamic, creative & intelligent team that will make you love being at work.
An autonomous and hands-on role to make an impact; you will be joining at an exciting time of growth!
Flexible work hours and an attractive pay package and perks
An inclusive work environment that lets you work in the way that works best for you!
Experience with Node.js (Loopback / Express)
Understanding design principles behind a scalable application
Implementing automated testing platforms and unit tests
Basic understanding of web markup, including HTML5 and CSS3
Write, debug, and deploy code to production
Strong experience with object-oriented programming
Strong fundamentals in Data Structures and Algorithms.
Responsible for analyzing current tasks, and designing and developing the code
We are sensitive to the timely delivery of the different sprint development milestones.
Proficient knowledge of cross-browser compatibility issues and ways to work around such issues.
Proficient understanding of code versioning tools, such as Git, Mercurial, or SVN.